Dataset schema (column name, dtype, observed range):

| Column | Type | Range |
|---|---|---|
| id | string | length 11 – 95 |
| author | string | length 3 – 36 |
| task_category | string | 16 classes |
| tags | sequence | length 1 – 4.05k |
| created_time | timestamp[s] | 2022-03-02 23:29:04 – 2025-03-18 02:34:30 |
| last_modified | timestamp[s] | 2021-05-13 19:09:22 – 2025-03-18 03:19:02 |
| downloads | int64 | 0 – 15.6M |
| likes | int64 | 0 – 4.86k |
| README | string | length 246 – 1.01M |
| matched_task | sequence | length 1 – 8 |
| matched_bigbio_names | sequence | length 1 – 8 |
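Concretely, a dump with this schema can be sliced with the `datasets` library. The sketch below is illustrative only: the repository id is a hypothetical placeholder, since this excerpt does not name the dataset.

```python
# Minimal sketch, assuming the dump is hosted on the Hugging Face Hub.
# "org/hub-model-cards" is a hypothetical placeholder, not the real repo id.
from datasets import load_dataset

ds = load_dataset("org/hub-model-cards", split="train")  # hypothetical id

# Filter on the typed columns from the schema table above.
popular_embedders = ds.filter(
    lambda row: row["task_category"] == "sentence-similarity"
    and row["downloads"] > 1000
)

for row in popular_embedders.select(range(min(5, len(popular_embedders)))):
    print(row["id"], row["downloads"], row["likes"])
```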
Example row (the README cell, a model card, is rendered below):

- id: lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF
- author: lynxeco
- task_category: sentence-similarity
- tags: ["sentence-transformers", "gguf", "feature-extraction", "sentence-similarity", "mteb", "arctic", "snowflake-arctic-embed", "transformers.js", "llama-cpp", "gguf-my-repo", "base_model:Snowflake/snowflake-arctic-embed-m-v1.5", "base_model:quantized:Snowflake/snowflake-arctic-embed-m-v1.5", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"]
- created_time: 2024-11-30T00:33:31
- last_modified: 2024-11-30T00:33:46
- downloads: 56
- likes: 0

README:

---
base_model: Snowflake/snowflake-arctic-embed-m-v1.5
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
- llama-cpp
- gguf-my-repo
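# The model-index section below uses the standard Hugging Face model-index
# layout: each "- task:" entry identifies an MTEB dataset (name, type,
# config, split, revision) and lists its metrics as (type, value) pairs:
# map/mrr/ndcg/precision/recall at cutoffs k, plus nauc_* diagnostics.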
model-index:
- name: snowflake-arctic-embed-m-v1.5
results:
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 59.53000000000001
- type: map_at_1
value: 34.282000000000004
- type: map_at_10
value: 50.613
- type: map_at_100
value: 51.269
- type: map_at_1000
value: 51.271
- type: map_at_20
value: 51.158
- type: map_at_3
value: 45.626
- type: map_at_5
value: 48.638
- type: mrr_at_1
value: 34.92176386913229
- type: mrr_at_10
value: 50.856081645555406
- type: mrr_at_100
value: 51.510739437069034
- type: mrr_at_1000
value: 51.51299498830165
- type: mrr_at_20
value: 51.39987941081724
- type: mrr_at_3
value: 45.993361782835514
- type: mrr_at_5
value: 48.88098624940742
- type: nauc_map_at_1000_diff1
value: 10.628675774160785
- type: nauc_map_at_1000_max
value: -10.11742589992339
- type: nauc_map_at_1000_std
value: -18.29277379812427
- type: nauc_map_at_100_diff1
value: 10.63250240035489
- type: nauc_map_at_100_max
value: -10.112078786734363
- type: nauc_map_at_100_std
value: -18.288524872706834
- type: nauc_map_at_10_diff1
value: 10.476494913081712
- type: nauc_map_at_10_max
value: -9.890937746734037
- type: nauc_map_at_10_std
value: -18.279750514750443
- type: nauc_map_at_1_diff1
value: 14.549204048461151
- type: nauc_map_at_1_max
value: -12.230560087701225
- type: nauc_map_at_1_std
value: -19.469903650130362
- type: nauc_map_at_20_diff1
value: 10.586564571825674
- type: nauc_map_at_20_max
value: -10.00292720526217
- type: nauc_map_at_20_std
value: -18.258077347878064
- type: nauc_map_at_3_diff1
value: 10.378663968090372
- type: nauc_map_at_3_max
value: -10.458896171786185
- type: nauc_map_at_3_std
value: -18.38852760333766
- type: nauc_map_at_5_diff1
value: 10.235960275925581
- type: nauc_map_at_5_max
value: -10.239496080409058
- type: nauc_map_at_5_std
value: -18.817023479445886
- type: nauc_mrr_at_1000_diff1
value: 8.718212649575722
- type: nauc_mrr_at_1000_max
value: -10.81022794038691
- type: nauc_mrr_at_1000_std
value: -17.87669499555167
- type: nauc_mrr_at_100_diff1
value: 8.722174171165133
- type: nauc_mrr_at_100_max
value: -10.804840985713525
- type: nauc_mrr_at_100_std
value: -17.872487099359986
- type: nauc_mrr_at_10_diff1
value: 8.609421635870238
- type: nauc_mrr_at_10_max
value: -10.568644717548432
- type: nauc_mrr_at_10_std
value: -17.872968762635814
- type: nauc_mrr_at_1_diff1
value: 12.69590006263834
- type: nauc_mrr_at_1_max
value: -12.082056561238321
- type: nauc_mrr_at_1_std
value: -18.036424092186657
- type: nauc_mrr_at_20_diff1
value: 8.684842497970315
- type: nauc_mrr_at_20_max
value: -10.691578914627286
- type: nauc_mrr_at_20_std
value: -17.84350301434992
- type: nauc_mrr_at_3_diff1
value: 8.649761557556763
- type: nauc_mrr_at_3_max
value: -11.104694428047496
- type: nauc_mrr_at_3_std
value: -18.149917948370344
- type: nauc_mrr_at_5_diff1
value: 8.433489750038396
- type: nauc_mrr_at_5_max
value: -10.917772454397436
- type: nauc_mrr_at_5_std
value: -18.4094211134111
- type: nauc_ndcg_at_1000_diff1
value: 10.19041067807956
- type: nauc_ndcg_at_1000_max
value: -9.54328201605796
- type: nauc_ndcg_at_1000_std
value: -17.824620427456633
- type: nauc_ndcg_at_100_diff1
value: 10.289491087585963
- type: nauc_ndcg_at_100_max
value: -9.357214331420337
- type: nauc_ndcg_at_100_std
value: -17.657600653632873
- type: nauc_ndcg_at_10_diff1
value: 9.435530877596092
- type: nauc_ndcg_at_10_max
value: -8.182581635383546
- type: nauc_ndcg_at_10_std
value: -17.603156479980388
- type: nauc_ndcg_at_1_diff1
value: 14.549204048461151
- type: nauc_ndcg_at_1_max
value: -12.230560087701225
- type: nauc_ndcg_at_1_std
value: -19.469903650130362
- type: nauc_ndcg_at_20_diff1
value: 9.885227087275197
- type: nauc_ndcg_at_20_max
value: -8.52362662391439
- type: nauc_ndcg_at_20_std
value: -17.441705436231764
- type: nauc_ndcg_at_3_diff1
value: 9.22542769998547
- type: nauc_ndcg_at_3_max
value: -9.903590564219288
- type: nauc_ndcg_at_3_std
value: -18.357220221111593
- type: nauc_ndcg_at_5_diff1
value: 8.8756720745828
- type: nauc_ndcg_at_5_max
value: -9.269764943861245
- type: nauc_ndcg_at_5_std
value: -19.009229433187784
- type: nauc_precision_at_1000_diff1
value: 3.733355117431035
- type: nauc_precision_at_1000_max
value: 3.9603571352517393
- type: nauc_precision_at_1000_std
value: 70.07345061131439
- type: nauc_precision_at_100_diff1
value: 29.019032142462457
- type: nauc_precision_at_100_max
value: 40.75153328286103
- type: nauc_precision_at_100_std
value: 62.634249549126594
- type: nauc_precision_at_10_diff1
value: 2.5762677254910353
- type: nauc_precision_at_10_max
value: 6.096298633773051
- type: nauc_precision_at_10_std
value: -11.507400451348587
- type: nauc_precision_at_1_diff1
value: 14.549204048461151
- type: nauc_precision_at_1_max
value: -12.230560087701225
- type: nauc_precision_at_1_std
value: -19.469903650130362
- type: nauc_precision_at_20_diff1
value: 1.715540124567996
- type: nauc_precision_at_20_max
value: 21.53546453945913
- type: nauc_precision_at_20_std
value: 1.537961142195571
- type: nauc_precision_at_3_diff1
value: 5.701850652555737
- type: nauc_precision_at_3_max
value: -8.180345365085552
- type: nauc_precision_at_3_std
value: -18.37033750502482
- type: nauc_precision_at_5_diff1
value: 3.6053552181042843
- type: nauc_precision_at_5_max
value: -5.207647070615612
- type: nauc_precision_at_5_std
value: -19.89491085427258
- type: nauc_recall_at_1000_diff1
value: 3.733355117431255
- type: nauc_recall_at_1000_max
value: 3.9603571352482194
- type: nauc_recall_at_1000_std
value: 70.07345061131205
- type: nauc_recall_at_100_diff1
value: 29.01903214246288
- type: nauc_recall_at_100_max
value: 40.7515332828621
- type: nauc_recall_at_100_std
value: 62.63424954912607
- type: nauc_recall_at_10_diff1
value: 2.5762677254911988
- type: nauc_recall_at_10_max
value: 6.0962986337729905
- type: nauc_recall_at_10_std
value: -11.507400451348577
- type: nauc_recall_at_1_diff1
value: 14.549204048461151
- type: nauc_recall_at_1_max
value: -12.230560087701225
- type: nauc_recall_at_1_std
value: -19.469903650130362
- type: nauc_recall_at_20_diff1
value: 1.7155401245682675
- type: nauc_recall_at_20_max
value: 21.535464539459632
- type: nauc_recall_at_20_std
value: 1.5379611421957025
- type: nauc_recall_at_3_diff1
value: 5.7018506525557875
- type: nauc_recall_at_3_max
value: -8.180345365085538
- type: nauc_recall_at_3_std
value: -18.370337505024796
- type: nauc_recall_at_5_diff1
value: 3.6053552181043913
- type: nauc_recall_at_5_max
value: -5.207647070615579
- type: nauc_recall_at_5_std
value: -19.894910854272492
- type: ndcg_at_1
value: 34.282000000000004
- type: ndcg_at_10
value: 59.53000000000001
- type: ndcg_at_100
value: 62.187000000000005
- type: ndcg_at_1000
value: 62.243
- type: ndcg_at_20
value: 61.451
- type: ndcg_at_3
value: 49.393
- type: ndcg_at_5
value: 54.771
- type: precision_at_1
value: 34.282000000000004
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.769
- type: precision_at_3
value: 20.104
- type: precision_at_5
value: 14.651
- type: recall_at_1
value: 34.282000000000004
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.21799999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 95.377
- type: recall_at_3
value: 60.313
- type: recall_at_5
value: 73.257
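# Reading the ArguAna block above: main_score (59.53) is ndcg_at_10, and
# recall_at_100 = 99.218 means nearly every relevant document is retrieved
# within the top 100 results. The entries below repeat the same layout for
# the CQADupstack* retrieval subsets.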
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 53.885000000000005
- type: map_at_1
value: 35.429
- type: map_at_10
value: 47.469
- type: map_at_100
value: 48.997
- type: map_at_1000
value: 49.117
- type: map_at_20
value: 48.324
- type: map_at_3
value: 43.835
- type: map_at_5
value: 46.043
- type: mrr_at_1
value: 43.34763948497854
- type: mrr_at_10
value: 53.258623430297234
- type: mrr_at_100
value: 53.99123884299005
- type: mrr_at_1000
value: 54.02458101713216
- type: mrr_at_20
value: 53.695964669618945
- type: mrr_at_3
value: 50.81068192656173
- type: mrr_at_5
value: 52.45588936576058
- type: nauc_map_at_1000_diff1
value: 51.55382824218782
- type: nauc_map_at_1000_max
value: 31.855350695084606
- type: nauc_map_at_1000_std
value: -5.465862008150992
- type: nauc_map_at_100_diff1
value: 51.55889312452534
- type: nauc_map_at_100_max
value: 31.88429637207401
- type: nauc_map_at_100_std
value: -5.40805152544196
- type: nauc_map_at_10_diff1
value: 51.6592677505875
- type: nauc_map_at_10_max
value: 31.554425233617543
- type: nauc_map_at_10_std
value: -6.125756131339046
- type: nauc_map_at_1_diff1
value: 55.6889617582672
- type: nauc_map_at_1_max
value: 27.821166966868176
- type: nauc_map_at_1_std
value: -5.778838498211728
- type: nauc_map_at_20_diff1
value: 51.70520970992564
- type: nauc_map_at_20_max
value: 31.811676633900465
- type: nauc_map_at_20_std
value: -5.463596751904718
- type: nauc_map_at_3_diff1
value: 53.206169626589606
- type: nauc_map_at_3_max
value: 31.64373830824983
- type: nauc_map_at_3_std
value: -6.054761451312827
- type: nauc_map_at_5_diff1
value: 52.37308971673694
- type: nauc_map_at_5_max
value: 31.974302019633644
- type: nauc_map_at_5_std
value: -6.302653399940531
- type: nauc_mrr_at_1000_diff1
value: 49.345152231490616
- type: nauc_mrr_at_1000_max
value: 33.49789501712511
- type: nauc_mrr_at_1000_std
value: -6.054730861163538
- type: nauc_mrr_at_100_diff1
value: 49.3387577601307
- type: nauc_mrr_at_100_max
value: 33.48149992464187
- type: nauc_mrr_at_100_std
value: -6.061177137579308
- type: nauc_mrr_at_10_diff1
value: 49.08312288449718
- type: nauc_mrr_at_10_max
value: 33.470393322577465
- type: nauc_mrr_at_10_std
value: -6.180286430216975
- type: nauc_mrr_at_1_diff1
value: 52.43364978537192
- type: nauc_mrr_at_1_max
value: 31.521755633355713
- type: nauc_mrr_at_1_std
value: -7.002499524130836
- type: nauc_mrr_at_20_diff1
value: 49.311059224991766
- type: nauc_mrr_at_20_max
value: 33.538523037692144
- type: nauc_mrr_at_20_std
value: -6.034619474981136
- type: nauc_mrr_at_3_diff1
value: 49.90489868439366
- type: nauc_mrr_at_3_max
value: 34.400493912164606
- type: nauc_mrr_at_3_std
value: -6.028875320994629
- type: nauc_mrr_at_5_diff1
value: 49.033661898983475
- type: nauc_mrr_at_5_max
value: 33.732315350193936
- type: nauc_mrr_at_5_std
value: -6.272548556330368
- type: nauc_ndcg_at_1000_diff1
value: 49.81681892539247
- type: nauc_ndcg_at_1000_max
value: 33.06518006062093
- type: nauc_ndcg_at_1000_std
value: -4.282105713014755
- type: nauc_ndcg_at_100_diff1
value: 49.42362108857786
- type: nauc_ndcg_at_100_max
value: 32.92024325540483
- type: nauc_ndcg_at_100_std
value: -3.7786765305496717
- type: nauc_ndcg_at_10_diff1
value: 48.83102435475594
- type: nauc_ndcg_at_10_max
value: 31.898404563611958
- type: nauc_ndcg_at_10_std
value: -6.2024003866707
- type: nauc_ndcg_at_1_diff1
value: 52.43364978537192
- type: nauc_ndcg_at_1_max
value: 31.521755633355713
- type: nauc_ndcg_at_1_std
value: -7.002499524130836
- type: nauc_ndcg_at_20_diff1
value: 49.466526454438316
- type: nauc_ndcg_at_20_max
value: 32.424462698701674
- type: nauc_ndcg_at_20_std
value: -4.520809563712905
- type: nauc_ndcg_at_3_diff1
value: 50.997884562583884
- type: nauc_ndcg_at_3_max
value: 33.26787046916917
- type: nauc_ndcg_at_3_std
value: -6.340699471083753
- type: nauc_ndcg_at_5_diff1
value: 49.68314458398097
- type: nauc_ndcg_at_5_max
value: 32.80910071143984
- type: nauc_ndcg_at_5_std
value: -6.734495576445887
- type: nauc_precision_at_1000_diff1
value: -24.18940012795299
- type: nauc_precision_at_1000_max
value: -10.995343674356896
- type: nauc_precision_at_1000_std
value: -8.298841004724856
- type: nauc_precision_at_100_diff1
value: -18.104939577865935
- type: nauc_precision_at_100_max
value: -1.3757613100627637
- type: nauc_precision_at_100_std
value: 0.07661922190466432
- type: nauc_precision_at_10_diff1
value: 3.9624459059275967
- type: nauc_precision_at_10_max
value: 14.841561593450391
- type: nauc_precision_at_10_std
value: -2.485374333613117
- type: nauc_precision_at_1_diff1
value: 52.43364978537192
- type: nauc_precision_at_1_max
value: 31.521755633355713
- type: nauc_precision_at_1_std
value: -7.002499524130836
- type: nauc_precision_at_20_diff1
value: -4.4791763436505265
- type: nauc_precision_at_20_max
value: 9.157872836996276
- type: nauc_precision_at_20_std
value: 2.086903518342088
- type: nauc_precision_at_3_diff1
value: 28.480888018235568
- type: nauc_precision_at_3_max
value: 30.34526267718485
- type: nauc_precision_at_3_std
value: -6.3006706923866025
- type: nauc_precision_at_5_diff1
value: 16.488039195453517
- type: nauc_precision_at_5_max
value: 24.593477099241852
- type: nauc_precision_at_5_std
value: -5.316448107840636
- type: nauc_recall_at_1000_diff1
value: 34.715187316533076
- type: nauc_recall_at_1000_max
value: 58.2266544684947
- type: nauc_recall_at_1000_std
value: 63.85237636398278
- type: nauc_recall_at_100_diff1
value: 36.08623826028132
- type: nauc_recall_at_100_max
value: 33.05011429439473
- type: nauc_recall_at_100_std
value: 16.559545021212564
- type: nauc_recall_at_10_diff1
value: 39.76738610714205
- type: nauc_recall_at_10_max
value: 28.233045706945997
- type: nauc_recall_at_10_std
value: -5.13243784043598
- type: nauc_recall_at_1_diff1
value: 55.6889617582672
- type: nauc_recall_at_1_max
value: 27.821166966868176
- type: nauc_recall_at_1_std
value: -5.778838498211728
- type: nauc_recall_at_20_diff1
value: 41.18682480073759
- type: nauc_recall_at_20_max
value: 29.525993239296945
- type: nauc_recall_at_20_std
value: 1.5003598438954298
- type: nauc_recall_at_3_diff1
value: 48.31879460301157
- type: nauc_recall_at_3_max
value: 32.93751306970167
- type: nauc_recall_at_3_std
value: -5.28070084211707
- type: nauc_recall_at_5_diff1
value: 44.327686388315435
- type: nauc_recall_at_5_max
value: 32.04823486234599
- type: nauc_recall_at_5_std
value: -6.4221525602778256
- type: ndcg_at_1
value: 43.348
- type: ndcg_at_10
value: 53.885000000000005
- type: ndcg_at_100
value: 59.204
- type: ndcg_at_1000
value: 60.744
- type: ndcg_at_20
value: 55.995
- type: ndcg_at_3
value: 49.112
- type: ndcg_at_5
value: 51.61900000000001
- type: precision_at_1
value: 43.348
- type: precision_at_10
value: 10.242999999999999
- type: precision_at_100
value: 1.6150000000000002
- type: precision_at_1000
value: 0.203
- type: precision_at_20
value: 6.066
- type: precision_at_3
value: 23.605
- type: precision_at_5
value: 17.024
- type: recall_at_1
value: 35.429
- type: recall_at_10
value: 65.77199999999999
- type: recall_at_100
value: 87.89
- type: recall_at_1000
value: 97.13000000000001
- type: recall_at_20
value: 73.299
- type: recall_at_3
value: 52.034000000000006
- type: recall_at_5
value: 58.96
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 49.55
- type: map_at_1
value: 31.684
- type: map_at_10
value: 43.258
- type: map_at_100
value: 44.628
- type: map_at_1000
value: 44.761
- type: map_at_20
value: 44.015
- type: map_at_3
value: 39.778000000000006
- type: map_at_5
value: 41.643
- type: mrr_at_1
value: 39.87261146496815
- type: mrr_at_10
value: 49.31978566373469
- type: mrr_at_100
value: 49.94922739445482
- type: mrr_at_1000
value: 49.990325601254106
- type: mrr_at_20
value: 49.70597468576704
- type: mrr_at_3
value: 47.070063694267546
- type: mrr_at_5
value: 48.23248407643316
- type: nauc_map_at_1000_diff1
value: 53.44044712371752
- type: nauc_map_at_1000_max
value: 34.5651440062204
- type: nauc_map_at_1000_std
value: -0.9814384609230475
- type: nauc_map_at_100_diff1
value: 53.429004435388464
- type: nauc_map_at_100_max
value: 34.52038957273436
- type: nauc_map_at_100_std
value: -1.1021936362699805
- type: nauc_map_at_10_diff1
value: 53.879128574022005
- type: nauc_map_at_10_max
value: 33.74771524140917
- type: nauc_map_at_10_std
value: -2.945132777205236
- type: nauc_map_at_1_diff1
value: 60.25159799695403
- type: nauc_map_at_1_max
value: 26.843892985235808
- type: nauc_map_at_1_std
value: -9.618702739509093
- type: nauc_map_at_20_diff1
value: 53.56789898225283
- type: nauc_map_at_20_max
value: 34.11628845872402
- type: nauc_map_at_20_std
value: -2.024376635870884
- type: nauc_map_at_3_diff1
value: 54.45882099014072
- type: nauc_map_at_3_max
value: 31.29495446507793
- type: nauc_map_at_3_std
value: -6.391948228781555
- type: nauc_map_at_5_diff1
value: 54.20536489050697
- type: nauc_map_at_5_max
value: 32.31001487256826
- type: nauc_map_at_5_std
value: -5.050953263346934
- type: nauc_mrr_at_1000_diff1
value: 50.835858995999125
- type: nauc_mrr_at_1000_max
value: 38.20717381701079
- type: nauc_mrr_at_1000_std
value: 4.174163368228787
- type: nauc_mrr_at_100_diff1
value: 50.827072441041224
- type: nauc_mrr_at_100_max
value: 38.21077622034756
- type: nauc_mrr_at_100_std
value: 4.1951082737013365
- type: nauc_mrr_at_10_diff1
value: 50.90578491570948
- type: nauc_mrr_at_10_max
value: 38.19229691746408
- type: nauc_mrr_at_10_std
value: 3.8290750066335546
- type: nauc_mrr_at_1_diff1
value: 54.807021746871186
- type: nauc_mrr_at_1_max
value: 37.09225642043841
- type: nauc_mrr_at_1_std
value: 0.5654547513131355
- type: nauc_mrr_at_20_diff1
value: 50.86247832095378
- type: nauc_mrr_at_20_max
value: 38.19277867384178
- type: nauc_mrr_at_20_std
value: 4.098932316791841
- type: nauc_mrr_at_3_diff1
value: 50.788934370903036
- type: nauc_mrr_at_3_max
value: 37.72130561895659
- type: nauc_mrr_at_3_std
value: 2.7339370381517583
- type: nauc_mrr_at_5_diff1
value: 50.72543792525547
- type: nauc_mrr_at_5_max
value: 37.57740908475375
- type: nauc_mrr_at_5_std
value: 2.742881431085094
- type: nauc_ndcg_at_1000_diff1
value: 50.89692885407576
- type: nauc_ndcg_at_1000_max
value: 37.250583054716955
- type: nauc_ndcg_at_1000_std
value: 5.552279826578831
- type: nauc_ndcg_at_100_diff1
value: 50.624606875496944
- type: nauc_ndcg_at_100_max
value: 37.1024514234627
- type: nauc_ndcg_at_100_std
value: 5.495892760032762
- type: nauc_ndcg_at_10_diff1
value: 51.910387255793445
- type: nauc_ndcg_at_10_max
value: 36.71168418905039
- type: nauc_ndcg_at_10_std
value: 2.3064115117905217
- type: nauc_ndcg_at_1_diff1
value: 54.807021746871186
- type: nauc_ndcg_at_1_max
value: 37.09225642043841
- type: nauc_ndcg_at_1_std
value: 0.5654547513131355
- type: nauc_ndcg_at_20_diff1
value: 51.43416588546778
- type: nauc_ndcg_at_20_max
value: 36.76387180172346
- type: nauc_ndcg_at_20_std
value: 3.7012798827049718
- type: nauc_ndcg_at_3_diff1
value: 50.91198494475423
- type: nauc_ndcg_at_3_max
value: 34.92770670756687
- type: nauc_ndcg_at_3_std
value: -0.9071486759887368
- type: nauc_ndcg_at_5_diff1
value: 51.63559468683886
- type: nauc_ndcg_at_5_max
value: 34.86849679864564
- type: nauc_ndcg_at_5_std
value: -0.734837221224976
- type: nauc_precision_at_1000_diff1
value: -13.43645457127175
- type: nauc_precision_at_1000_max
value: 12.71162105198664
- type: nauc_precision_at_1000_std
value: 33.175399007040255
- type: nauc_precision_at_100_diff1
value: -8.549834785105412
- type: nauc_precision_at_100_max
value: 22.47383497331883
- type: nauc_precision_at_100_std
value: 39.09108761430844
- type: nauc_precision_at_10_diff1
value: 7.556572451100043
- type: nauc_precision_at_10_max
value: 35.35285122987575
- type: nauc_precision_at_10_std
value: 29.417466305615967
- type: nauc_precision_at_1_diff1
value: 54.807021746871186
- type: nauc_precision_at_1_max
value: 37.09225642043841
- type: nauc_precision_at_1_std
value: 0.5654547513131355
- type: nauc_precision_at_20_diff1
value: -0.550158641635712
- type: nauc_precision_at_20_max
value: 29.9068430006187
- type: nauc_precision_at_20_std
value: 33.920603132821185
- type: nauc_precision_at_3_diff1
value: 25.551264664276687
- type: nauc_precision_at_3_max
value: 37.59463225854679
- type: nauc_precision_at_3_std
value: 13.707295021359043
- type: nauc_precision_at_5_diff1
value: 17.76136129817151
- type: nauc_precision_at_5_max
value: 35.85363807255972
- type: nauc_precision_at_5_std
value: 19.48470876841111
- type: nauc_recall_at_1000_diff1
value: 37.1593620123866
- type: nauc_recall_at_1000_max
value: 46.29322536951135
- type: nauc_recall_at_1000_std
value: 51.47312657083967
- type: nauc_recall_at_100_diff1
value: 37.7542224949536
- type: nauc_recall_at_100_max
value: 38.84120637703135
- type: nauc_recall_at_100_std
value: 28.839672572221925
- type: nauc_recall_at_10_diff1
value: 46.24130302658384
- type: nauc_recall_at_10_max
value: 35.89001724712849
- type: nauc_recall_at_10_std
value: 6.985137790828618
- type: nauc_recall_at_1_diff1
value: 60.25159799695403
- type: nauc_recall_at_1_max
value: 26.843892985235808
- type: nauc_recall_at_1_std
value: -9.618702739509093
- type: nauc_recall_at_20_diff1
value: 43.63576680886187
- type: nauc_recall_at_20_max
value: 36.79079644708101
- type: nauc_recall_at_20_std
value: 13.81561928605839
- type: nauc_recall_at_3_diff1
value: 48.2299322140522
- type: nauc_recall_at_3_max
value: 30.038088484376203
- type: nauc_recall_at_3_std
value: -4.871116183843762
- type: nauc_recall_at_5_diff1
value: 47.22331872695983
- type: nauc_recall_at_5_max
value: 30.398541477173136
- type: nauc_recall_at_5_std
value: -3.2038541888528957
- type: ndcg_at_1
value: 39.873
- type: ndcg_at_10
value: 49.55
- type: ndcg_at_100
value: 53.809
- type: ndcg_at_1000
value: 55.767999999999994
- type: ndcg_at_20
value: 51.275999999999996
- type: ndcg_at_3
value: 44.91
- type: ndcg_at_5
value: 46.855999999999995
- type: precision_at_1
value: 39.873
- type: precision_at_10
value: 9.65
- type: precision_at_100
value: 1.522
- type: precision_at_1000
value: 0.196
- type: precision_at_20
value: 5.701
- type: precision_at_3
value: 22.166
- type: precision_at_5
value: 15.643
- type: recall_at_1
value: 31.684
- type: recall_at_10
value: 60.69
- type: recall_at_100
value: 78.521
- type: recall_at_1000
value: 91.02900000000001
- type: recall_at_20
value: 66.973
- type: recall_at_3
value: 46.807
- type: recall_at_5
value: 52.402
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 62.686
- type: map_at_1
value: 43.856
- type: map_at_10
value: 57.056
- type: map_at_100
value: 58.048
- type: map_at_1000
value: 58.092
- type: map_at_20
value: 57.684000000000005
- type: map_at_3
value: 53.958
- type: map_at_5
value: 55.80500000000001
- type: mrr_at_1
value: 50.03134796238244
- type: mrr_at_10
value: 60.31022043091019
- type: mrr_at_100
value: 60.91892338857461
- type: mrr_at_1000
value: 60.93770463536649
- type: mrr_at_20
value: 60.705642387392736
- type: mrr_at_3
value: 58.286311389759746
- type: mrr_at_5
value: 59.49320794148393
- type: nauc_map_at_1000_diff1
value: 54.849140197256695
- type: nauc_map_at_1000_max
value: 38.978448968260224
- type: nauc_map_at_1000_std
value: 0.4955439383268162
- type: nauc_map_at_100_diff1
value: 54.824334747823364
- type: nauc_map_at_100_max
value: 38.959443109450994
- type: nauc_map_at_100_std
value: 0.49626092018886037
- type: nauc_map_at_10_diff1
value: 54.778189277103394
- type: nauc_map_at_10_max
value: 38.20972191654546
- type: nauc_map_at_10_std
value: -0.7239823837455759
- type: nauc_map_at_1_diff1
value: 58.74017164752485
- type: nauc_map_at_1_max
value: 31.528974862589585
- type: nauc_map_at_1_std
value: -3.273824691929492
- type: nauc_map_at_20_diff1
value: 54.78943693416187
- type: nauc_map_at_20_max
value: 38.77930316443076
- type: nauc_map_at_20_std
value: 0.25607460088355544
- type: nauc_map_at_3_diff1
value: 55.68313410225767
- type: nauc_map_at_3_max
value: 36.22847284104399
- type: nauc_map_at_3_std
value: -3.010979639100503
- type: nauc_map_at_5_diff1
value: 55.11385094420661
- type: nauc_map_at_5_max
value: 37.319681045490924
- type: nauc_map_at_5_std
value: -2.156640733221061
- type: nauc_mrr_at_1000_diff1
value: 54.504759468380705
- type: nauc_mrr_at_1000_max
value: 40.58849492650406
- type: nauc_mrr_at_1000_std
value: 1.8226622175866118
- type: nauc_mrr_at_100_diff1
value: 54.4918034449886
- type: nauc_mrr_at_100_max
value: 40.59202728933427
- type: nauc_mrr_at_100_std
value: 1.8276428096536335
- type: nauc_mrr_at_10_diff1
value: 54.33603399493329
- type: nauc_mrr_at_10_max
value: 40.58896878978089
- type: nauc_mrr_at_10_std
value: 1.5733340909114375
- type: nauc_mrr_at_1_diff1
value: 58.062410036466105
- type: nauc_mrr_at_1_max
value: 37.660958859966506
- type: nauc_mrr_at_1_std
value: 0.029007600674170648
- type: nauc_mrr_at_20_diff1
value: 54.43793386924358
- type: nauc_mrr_at_20_max
value: 40.66773423875307
- type: nauc_mrr_at_20_std
value: 1.891967891797154
- type: nauc_mrr_at_3_diff1
value: 54.77901284537966
- type: nauc_mrr_at_3_max
value: 40.182219821206964
- type: nauc_mrr_at_3_std
value: 0.8911935034597871
- type: nauc_mrr_at_5_diff1
value: 54.466068837163675
- type: nauc_mrr_at_5_max
value: 40.334996916684126
- type: nauc_mrr_at_5_std
value: 0.9460830492892364
- type: nauc_ndcg_at_1000_diff1
value: 53.8465376860938
- type: nauc_ndcg_at_1000_max
value: 41.63158111016696
- type: nauc_ndcg_at_1000_std
value: 3.864205884257578
- type: nauc_ndcg_at_100_diff1
value: 53.4025864436944
- type: nauc_ndcg_at_100_max
value: 41.805453995307914
- type: nauc_ndcg_at_100_std
value: 4.36777557904857
- type: nauc_ndcg_at_10_diff1
value: 52.96034987157544
- type: nauc_ndcg_at_10_max
value: 40.7601173480795
- type: nauc_ndcg_at_10_std
value: 1.905824035879141
- type: nauc_ndcg_at_1_diff1
value: 58.062410036466105
- type: nauc_ndcg_at_1_max
value: 37.660958859966506
- type: nauc_ndcg_at_1_std
value: 0.029007600674170648
- type: nauc_ndcg_at_20_diff1
value: 53.2834771889242
- type: nauc_ndcg_at_20_max
value: 41.713541932946406
- type: nauc_ndcg_at_20_std
value: 3.865102828793311
- type: nauc_ndcg_at_3_diff1
value: 54.03389464372289
- type: nauc_ndcg_at_3_max
value: 38.41449914649933
- type: nauc_ndcg_at_3_std
value: -0.886276189886313
- type: nauc_ndcg_at_5_diff1
value: 53.456413320299
- type: nauc_ndcg_at_5_max
value: 39.49048882649335
- type: nauc_ndcg_at_5_std
value: -0.42692690160443814
- type: nauc_precision_at_1000_diff1
value: -14.770791653274824
- type: nauc_precision_at_1000_max
value: 21.479874538905246
- type: nauc_precision_at_1000_std
value: 28.607024261300207
- type: nauc_precision_at_100_diff1
value: -12.189696449878126
- type: nauc_precision_at_100_max
value: 26.69785787492456
- type: nauc_precision_at_100_std
value: 33.59098307467553
- type: nauc_precision_at_10_diff1
value: 6.922968330978399
- type: nauc_precision_at_10_max
value: 34.52138344123087
- type: nauc_precision_at_10_std
value: 21.768427637079952
- type: nauc_precision_at_1_diff1
value: 58.062410036466105
- type: nauc_precision_at_1_max
value: 37.660958859966506
- type: nauc_precision_at_1_std
value: 0.029007600674170648
- type: nauc_precision_at_20_diff1
value: -0.6837867902179278
- type: nauc_precision_at_20_max
value: 33.98683709011133
- type: nauc_precision_at_20_std
value: 30.8845561918902
- type: nauc_precision_at_3_diff1
value: 28.195043041120847
- type: nauc_precision_at_3_max
value: 37.659916094938836
- type: nauc_precision_at_3_std
value: 7.226520146634867
- type: nauc_precision_at_5_diff1
value: 16.633667288096245
- type: nauc_precision_at_5_max
value: 34.90176597404891
- type: nauc_precision_at_5_std
value: 12.421585442334088
- type: nauc_recall_at_1000_diff1
value: 45.20743732415397
- type: nauc_recall_at_1000_max
value: 72.77115913579242
- type: nauc_recall_at_1000_std
value: 70.48328496679083
- type: nauc_recall_at_100_diff1
value: 38.56282680810794
- type: nauc_recall_at_100_max
value: 55.46797683321103
- type: nauc_recall_at_100_std
value: 36.878791151929136
- type: nauc_recall_at_10_diff1
value: 44.18252051452362
- type: nauc_recall_at_10_max
value: 43.33391810040086
- type: nauc_recall_at_10_std
value: 6.663378192277723
- type: nauc_recall_at_1_diff1
value: 58.74017164752485
- type: nauc_recall_at_1_max
value: 31.528974862589585
- type: nauc_recall_at_1_std
value: -3.273824691929492
- type: nauc_recall_at_20_diff1
value: 44.19944231642417
- type: nauc_recall_at_20_max
value: 49.401101483915866
- type: nauc_recall_at_20_std
value: 18.97803841673839
- type: nauc_recall_at_3_diff1
value: 49.56378985428704
- type: nauc_recall_at_3_max
value: 36.434210616870224
- type: nauc_recall_at_3_std
value: -2.850559971607616
- type: nauc_recall_at_5_diff1
value: 47.37107217086109
- type: nauc_recall_at_5_max
value: 39.0236745509895
- type: nauc_recall_at_5_std
value: -1.7402454457937195
- type: ndcg_at_1
value: 50.031000000000006
- type: ndcg_at_10
value: 62.686
- type: ndcg_at_100
value: 66.403
- type: ndcg_at_1000
value: 67.241
- type: ndcg_at_20
value: 64.37899999999999
- type: ndcg_at_3
value: 57.859
- type: ndcg_at_5
value: 60.375
- type: precision_at_1
value: 50.031000000000006
- type: precision_at_10
value: 9.856
- type: precision_at_100
value: 1.266
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_20
value: 5.489
- type: precision_at_3
value: 25.746999999999996
- type: precision_at_5
value: 17.492
- type: recall_at_1
value: 43.856
- type: recall_at_10
value: 75.824
- type: recall_at_100
value: 91.622
- type: recall_at_1000
value: 97.538
- type: recall_at_20
value: 81.951
- type: recall_at_3
value: 63.016000000000005
- type: recall_at_5
value: 69.18299999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 43.983
- type: map_at_1
value: 28.942
- type: map_at_10
value: 38.621
- type: map_at_100
value: 39.7
- type: map_at_1000
value: 39.766
- type: map_at_20
value: 39.262
- type: map_at_3
value: 35.719
- type: map_at_5
value: 37.378
- type: mrr_at_1
value: 31.29943502824859
- type: mrr_at_10
value: 40.76463994260603
- type: mrr_at_100
value: 41.67073617629083
- type: mrr_at_1000
value: 41.717446259457105
- type: mrr_at_20
value: 41.32577374689195
- type: mrr_at_3
value: 37.984934086628996
- type: mrr_at_5
value: 39.64595103578152
- type: nauc_map_at_1000_diff1
value: 43.64461679688985
- type: nauc_map_at_1000_max
value: 31.53717883948204
- type: nauc_map_at_1000_std
value: 1.193745788248017
- type: nauc_map_at_100_diff1
value: 43.63847825079489
- type: nauc_map_at_100_max
value: 31.536602619279165
- type: nauc_map_at_100_std
value: 1.2001240243342401
- type: nauc_map_at_10_diff1
value: 43.845991987142014
- type: nauc_map_at_10_max
value: 31.27509937344113
- type: nauc_map_at_10_std
value: 0.7327934840520994
- type: nauc_map_at_1_diff1
value: 50.62269273984579
- type: nauc_map_at_1_max
value: 30.16325757909521
- type: nauc_map_at_1_std
value: -0.6398875136233392
- type: nauc_map_at_20_diff1
value: 43.630758403790914
- type: nauc_map_at_20_max
value: 31.408258098047703
- type: nauc_map_at_20_std
value: 1.12616034652217
- type: nauc_map_at_3_diff1
value: 44.823493567359456
- type: nauc_map_at_3_max
value: 31.075886347614496
- type: nauc_map_at_3_std
value: -0.25126874515735426
- type: nauc_map_at_5_diff1
value: 43.79768853087658
- type: nauc_map_at_5_max
value: 31.091080995725324
- type: nauc_map_at_5_std
value: 0.16440771782544047
- type: nauc_mrr_at_1000_diff1
value: 42.7865400752329
- type: nauc_mrr_at_1000_max
value: 32.84731670326893
- type: nauc_mrr_at_1000_std
value: 2.6067637582013825
- type: nauc_mrr_at_100_diff1
value: 42.771741548331065
- type: nauc_mrr_at_100_max
value: 32.85324232845987
- type: nauc_mrr_at_100_std
value: 2.6092786694308376
- type: nauc_mrr_at_10_diff1
value: 42.82969738870672
- type: nauc_mrr_at_10_max
value: 32.69407549631432
- type: nauc_mrr_at_10_std
value: 2.302903910016054
- type: nauc_mrr_at_1_diff1
value: 49.05638333657571
- type: nauc_mrr_at_1_max
value: 33.12030717171514
- type: nauc_mrr_at_1_std
value: 1.3278035087690774
- type: nauc_mrr_at_20_diff1
value: 42.74267239536286
- type: nauc_mrr_at_20_max
value: 32.78571108973092
- type: nauc_mrr_at_20_std
value: 2.5932669908758643
- type: nauc_mrr_at_3_diff1
value: 43.69963426089187
- type: nauc_mrr_at_3_max
value: 32.78193126956233
- type: nauc_mrr_at_3_std
value: 1.634874463134699
- type: nauc_mrr_at_5_diff1
value: 42.838630647832524
- type: nauc_mrr_at_5_max
value: 32.459318735260545
- type: nauc_mrr_at_5_std
value: 1.9412518283209172
- type: nauc_ndcg_at_1000_diff1
value: 41.01253839851583
- type: nauc_ndcg_at_1000_max
value: 32.69570568894237
- type: nauc_ndcg_at_1000_std
value: 3.4254737113410343
- type: nauc_ndcg_at_100_diff1
value: 40.62589243745832
- type: nauc_ndcg_at_100_max
value: 32.664990655736126
- type: nauc_ndcg_at_100_std
value: 3.799569445326048
- type: nauc_ndcg_at_10_diff1
value: 41.31658753735306
- type: nauc_ndcg_at_10_max
value: 31.511946320339295
- type: nauc_ndcg_at_10_std
value: 2.0492930500796662
- type: nauc_ndcg_at_1_diff1
value: 49.05638333657571
- type: nauc_ndcg_at_1_max
value: 33.12030717171514
- type: nauc_ndcg_at_1_std
value: 1.3278035087690774
- type: nauc_ndcg_at_20_diff1
value: 40.66188223212841
- type: nauc_ndcg_at_20_max
value: 31.926240431497476
- type: nauc_ndcg_at_20_std
value: 3.370398664595343
- type: nauc_ndcg_at_3_diff1
value: 43.035580180241
- type: nauc_ndcg_at_3_max
value: 31.363874129878404
- type: nauc_ndcg_at_3_std
value: 0.1422507242819929
- type: nauc_ndcg_at_5_diff1
value: 41.29049003955878
- type: nauc_ndcg_at_5_max
value: 31.112034994977737
- type: nauc_ndcg_at_5_std
value: 0.860179279828966
- type: nauc_precision_at_1000_diff1
value: -12.41854465881981
- type: nauc_precision_at_1000_max
value: 14.706779246590548
- type: nauc_precision_at_1000_std
value: 9.812804367375206
- type: nauc_precision_at_100_diff1
value: 2.797520107808461
- type: nauc_precision_at_100_max
value: 24.335873541811406
- type: nauc_precision_at_100_std
value: 12.87186398750545
- type: nauc_precision_at_10_diff1
value: 24.530962799265847
- type: nauc_precision_at_10_max
value: 31.00772010798733
- type: nauc_precision_at_10_std
value: 6.696733001548185
- type: nauc_precision_at_1_diff1
value: 49.05638333657571
- type: nauc_precision_at_1_max
value: 33.12030717171514
- type: nauc_precision_at_1_std
value: 1.3278035087690774
- type: nauc_precision_at_20_diff1
value: 16.25028416351204
- type: nauc_precision_at_20_max
value: 29.629326492027342
- type: nauc_precision_at_20_std
value: 11.085888573121679
- type: nauc_precision_at_3_diff1
value: 33.923667689694256
- type: nauc_precision_at_3_max
value: 33.5859782361996
- type: nauc_precision_at_3_std
value: 1.9468331086918693
- type: nauc_precision_at_5_diff1
value: 27.917827233088875
- type: nauc_precision_at_5_max
value: 33.13290043423535
- type: nauc_precision_at_5_std
value: 3.800870695945311
- type: nauc_recall_at_1000_diff1
value: 9.680283388428789
- type: nauc_recall_at_1000_max
value: 49.479399284871235
- type: nauc_recall_at_1000_std
value: 31.506985071436088
- type: nauc_recall_at_100_diff1
value: 23.607673377885448
- type: nauc_recall_at_100_max
value: 36.637750366403935
- type: nauc_recall_at_100_std
value: 18.30770690564224
- type: nauc_recall_at_10_diff1
value: 33.199683418312446
- type: nauc_recall_at_10_max
value: 29.63115497012312
- type: nauc_recall_at_10_std
value: 4.813200391480566
- type: nauc_recall_at_1_diff1
value: 50.62269273984579
- type: nauc_recall_at_1_max
value: 30.16325757909521
- type: nauc_recall_at_1_std
value: -0.6398875136233392
- type: nauc_recall_at_20_diff1
value: 29.16488387844995
- type: nauc_recall_at_20_max
value: 30.788019479459
- type: nauc_recall_at_20_std
value: 11.031953917298853
- type: nauc_recall_at_3_diff1
value: 38.215351600417065
- type: nauc_recall_at_3_max
value: 29.619887154236128
- type: nauc_recall_at_3_std
value: -0.13237298980339363
- type: nauc_recall_at_5_diff1
value: 33.93788042633265
- type: nauc_recall_at_5_max
value: 28.67185092656741
- type: nauc_recall_at_5_std
value: 1.316700201091445
- type: ndcg_at_1
value: 31.299
- type: ndcg_at_10
value: 43.983
- type: ndcg_at_100
value: 48.992999999999995
- type: ndcg_at_1000
value: 50.757
- type: ndcg_at_20
value: 46.152
- type: ndcg_at_3
value: 38.367000000000004
- type: ndcg_at_5
value: 41.171
- type: precision_at_1
value: 31.299
- type: precision_at_10
value: 6.734
- type: precision_at_100
value: 0.972
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_20
value: 3.898
- type: precision_at_3
value: 16.121
- type: precision_at_5
value: 11.344999999999999
- type: recall_at_1
value: 28.942
- type: recall_at_10
value: 58.343999999999994
- type: recall_at_100
value: 80.82300000000001
- type: recall_at_1000
value: 94.348
- type: recall_at_20
value: 66.449
- type: recall_at_3
value: 43.415
- type: recall_at_5
value: 50.007999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 33.144
- type: map_at_1
value: 19.41
- type: map_at_10
value: 27.802
- type: map_at_100
value: 29.157
- type: map_at_1000
value: 29.274
- type: map_at_20
value: 28.549000000000003
- type: map_at_3
value: 25.052999999999997
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.756218905472636
- type: mrr_at_10
value: 32.3623450209271
- type: mrr_at_100
value: 33.3648208444617
- type: mrr_at_1000
value: 33.427688215162185
- type: mrr_at_20
value: 32.93723485575758
- type: mrr_at_3
value: 29.539800995024883
- type: mrr_at_5
value: 31.156716417910452
- type: nauc_map_at_1000_diff1
value: 36.196391248081284
- type: nauc_map_at_1000_max
value: 25.650644367091495
- type: nauc_map_at_1000_std
value: 6.130340697729844
- type: nauc_map_at_100_diff1
value: 36.138890642411376
- type: nauc_map_at_100_max
value: 25.587124763888518
- type: nauc_map_at_100_std
value: 6.129336379055536
- type: nauc_map_at_10_diff1
value: 36.254426743566775
- type: nauc_map_at_10_max
value: 25.465599906543034
- type: nauc_map_at_10_std
value: 5.880280378112879
- type: nauc_map_at_1_diff1
value: 42.890551563179976
- type: nauc_map_at_1_max
value: 25.813805281076956
- type: nauc_map_at_1_std
value: 5.150718386163028
- type: nauc_map_at_20_diff1
value: 35.98551587974314
- type: nauc_map_at_20_max
value: 25.501540521726636
- type: nauc_map_at_20_std
value: 5.858703157458749
- type: nauc_map_at_3_diff1
value: 37.646558039577734
- type: nauc_map_at_3_max
value: 26.138491471124247
- type: nauc_map_at_3_std
value: 6.0487505175540734
- type: nauc_map_at_5_diff1
value: 36.817582976153695
- type: nauc_map_at_5_max
value: 25.398200211121146
- type: nauc_map_at_5_std
value: 6.31126763919522
- type: nauc_mrr_at_1000_diff1
value: 37.313544952847835
- type: nauc_mrr_at_1000_max
value: 26.96218532078988
- type: nauc_mrr_at_1000_std
value: 6.814359224654042
- type: nauc_mrr_at_100_diff1
value: 37.28104407653679
- type: nauc_mrr_at_100_max
value: 26.931243040477256
- type: nauc_mrr_at_100_std
value: 6.800500150841733
- type: nauc_mrr_at_10_diff1
value: 37.315832621275895
- type: nauc_mrr_at_10_max
value: 26.941454225978372
- type: nauc_mrr_at_10_std
value: 6.837046527796884
- type: nauc_mrr_at_1_diff1
value: 43.19904188582958
- type: nauc_mrr_at_1_max
value: 26.975620445904795
- type: nauc_mrr_at_1_std
value: 4.52071008581395
- type: nauc_mrr_at_20_diff1
value: 37.2200524790774
- type: nauc_mrr_at_20_max
value: 26.971494160765847
- type: nauc_mrr_at_20_std
value: 6.716431228783282
- type: nauc_mrr_at_3_diff1
value: 38.46236387340654
- type: nauc_mrr_at_3_max
value: 27.846812992192056
- type: nauc_mrr_at_3_std
value: 6.550711872569794
- type: nauc_mrr_at_5_diff1
value: 37.620346007658476
- type: nauc_mrr_at_5_max
value: 27.031025952102038
- type: nauc_mrr_at_5_std
value: 7.32343760231163
- type: nauc_ndcg_at_1000_diff1
value: 34.95081314840592
- type: nauc_ndcg_at_1000_max
value: 26.89265465124325
- type: nauc_ndcg_at_1000_std
value: 7.854154466831975
- type: nauc_ndcg_at_100_diff1
value: 34.01417812563093
- type: nauc_ndcg_at_100_max
value: 25.792737746436835
- type: nauc_ndcg_at_100_std
value: 7.726584165493833
- type: nauc_ndcg_at_10_diff1
value: 33.895122516474466
- type: nauc_ndcg_at_10_max
value: 25.388442204589612
- type: nauc_ndcg_at_10_std
value: 6.359560223645991
- type: nauc_ndcg_at_1_diff1
value: 43.19904188582958
- type: nauc_ndcg_at_1_max
value: 26.975620445904795
- type: nauc_ndcg_at_1_std
value: 4.52071008581395
- type: nauc_ndcg_at_20_diff1
value: 33.36078689830245
- type: nauc_ndcg_at_20_max
value: 25.531794610571563
- type: nauc_ndcg_at_20_std
value: 6.136658608653248
- type: nauc_ndcg_at_3_diff1
value: 36.44505602530781
- type: nauc_ndcg_at_3_max
value: 26.9104071983157
- type: nauc_ndcg_at_3_std
value: 6.427178520371878
- type: nauc_ndcg_at_5_diff1
value: 35.01384323197442
- type: nauc_ndcg_at_5_max
value: 25.5560447088692
- type: nauc_ndcg_at_5_std
value: 7.3676236760360485
- type: nauc_precision_at_1000_diff1
value: 2.8903331041804514
- type: nauc_precision_at_1000_max
value: 4.059662742366004
- type: nauc_precision_at_1000_std
value: -1.5891687644008334
- type: nauc_precision_at_100_diff1
value: 8.437726471693766
- type: nauc_precision_at_100_max
value: 11.250588557568427
- type: nauc_precision_at_100_std
value: 4.231571164627862
- type: nauc_precision_at_10_diff1
value: 19.57085237210294
- type: nauc_precision_at_10_max
value: 20.973093492003905
- type: nauc_precision_at_10_std
value: 3.197416248152466
- type: nauc_precision_at_1_diff1
value: 43.19904188582958
- type: nauc_precision_at_1_max
value: 26.975620445904795
- type: nauc_precision_at_1_std
value: 4.52071008581395
- type: nauc_precision_at_20_diff1
value: 15.67136554192724
- type: nauc_precision_at_20_max
value: 17.706882621057858
- type: nauc_precision_at_20_std
value: 1.9363472182867714
- type: nauc_precision_at_3_diff1
value: 30.38035695042325
- type: nauc_precision_at_3_max
value: 26.48218693244094
- type: nauc_precision_at_3_std
value: 6.424657705785632
- type: nauc_precision_at_5_diff1
value: 25.272543315171458
- type: nauc_precision_at_5_max
value: 22.32441421311652
- type: nauc_precision_at_5_std
value: 7.4912569081905716
- type: nauc_recall_at_1000_diff1
value: 25.5748044137675
- type: nauc_recall_at_1000_max
value: 43.85796585370269
- type: nauc_recall_at_1000_std
value: 30.0338086596789
- type: nauc_recall_at_100_diff1
value: 22.577080638885093
- type: nauc_recall_at_100_max
value: 23.224511700617477
- type: nauc_recall_at_100_std
value: 15.187963852289313
- type: nauc_recall_at_10_diff1
value: 25.058592299355908
- type: nauc_recall_at_10_max
value: 22.24448483279841
- type: nauc_recall_at_10_std
value: 6.3179089740052765
- type: nauc_recall_at_1_diff1
value: 42.890551563179976
- type: nauc_recall_at_1_max
value: 25.813805281076956
- type: nauc_recall_at_1_std
value: 5.150718386163028
- type: nauc_recall_at_20_diff1
value: 22.433865123187307
- type: nauc_recall_at_20_max
value: 22.739695641511762
- type: nauc_recall_at_20_std
value: 5.362005125538497
- type: nauc_recall_at_3_diff1
value: 32.17919168998616
- type: nauc_recall_at_3_max
value: 26.044028436867357
- type: nauc_recall_at_3_std
value: 7.420349884006329
- type: nauc_recall_at_5_diff1
value: 28.967104573649138
- type: nauc_recall_at_5_max
value: 23.40865848168201
- type: nauc_recall_at_5_std
value: 9.174406147723621
- type: ndcg_at_1
value: 23.756
- type: ndcg_at_10
value: 33.144
- type: ndcg_at_100
value: 39.261
- type: ndcg_at_1000
value: 41.881
- type: ndcg_at_20
value: 35.56
- type: ndcg_at_3
value: 27.927999999999997
- type: ndcg_at_5
value: 30.293999999999997
- type: precision_at_1
value: 23.756
- type: precision_at_10
value: 5.995
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.14100000000000001
- type: precision_at_20
value: 3.688
- type: precision_at_3
value: 13.059999999999999
- type: precision_at_5
value: 9.602
- type: recall_at_1
value: 19.41
- type: recall_at_10
value: 45.074
- type: recall_at_100
value: 71.131
- type: recall_at_1000
value: 89.604
- type: recall_at_20
value: 53.673
- type: recall_at_3
value: 31.055
- type: recall_at_5
value: 36.714999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 49.675000000000004
- type: map_at_1
value: 33.178999999999995
- type: map_at_10
value: 43.807
- type: map_at_100
value: 45.17
- type: map_at_1000
value: 45.271
- type: map_at_20
value: 44.516
- type: map_at_3
value: 40.813
- type: map_at_5
value: 42.457
- type: mrr_at_1
value: 40.32723772858518
- type: mrr_at_10
value: 49.646867409138814
- type: mrr_at_100
value: 50.493686101426285
- type: mrr_at_1000
value: 50.525386961808834
- type: mrr_at_20
value: 50.120274354884586
- type: mrr_at_3
value: 47.49759384023096
- type: mrr_at_5
value: 48.72473532242535
- type: nauc_map_at_1000_diff1
value: 49.5947127786396
- type: nauc_map_at_1000_max
value: 33.39720045844929
- type: nauc_map_at_1000_std
value: -3.131428593252271
- type: nauc_map_at_100_diff1
value: 49.57797867324617
- type: nauc_map_at_100_max
value: 33.356927974709464
- type: nauc_map_at_100_std
value: -3.1661365376766337
- type: nauc_map_at_10_diff1
value: 49.59294630598952
- type: nauc_map_at_10_max
value: 32.86647346990462
- type: nauc_map_at_10_std
value: -4.1582043443386745
- type: nauc_map_at_1_diff1
value: 53.98646767288695
- type: nauc_map_at_1_max
value: 29.45629077638936
- type: nauc_map_at_1_std
value: -5.621187380771589
- type: nauc_map_at_20_diff1
value: 49.486982890447074
- type: nauc_map_at_20_max
value: 33.11681933406332
- type: nauc_map_at_20_std
value: -3.5826433195146854
- type: nauc_map_at_3_diff1
value: 50.81807107491861
- type: nauc_map_at_3_max
value: 32.32552291988859
- type: nauc_map_at_3_std
value: -3.952946504088928
- type: nauc_map_at_5_diff1
value: 49.70201354274439
- type: nauc_map_at_5_max
value: 32.831846031004886
- type: nauc_map_at_5_std
value: -3.8330488624207737
- type: nauc_mrr_at_1000_diff1
value: 49.04159472507738
- type: nauc_mrr_at_1000_max
value: 35.617600171138676
- type: nauc_mrr_at_1000_std
value: -1.5975830757486646
- type: nauc_mrr_at_100_diff1
value: 49.03848471692094
- type: nauc_mrr_at_100_max
value: 35.61936748662614
- type: nauc_mrr_at_100_std
value: -1.5922053398594729
- type: nauc_mrr_at_10_diff1
value: 48.92463964652612
- type: nauc_mrr_at_10_max
value: 35.37757708992045
- type: nauc_mrr_at_10_std
value: -2.2052028139567303
- type: nauc_mrr_at_1_diff1
value: 52.23915787290734
- type: nauc_mrr_at_1_max
value: 34.393531787632334
- type: nauc_mrr_at_1_std
value: -1.452007661016969
- type: nauc_mrr_at_20_diff1
value: 48.91168438018404
- type: nauc_mrr_at_20_max
value: 35.478962544421876
- type: nauc_mrr_at_20_std
value: -1.8246048423555414
- type: nauc_mrr_at_3_diff1
value: 50.115432665442164
- type: nauc_mrr_at_3_max
value: 35.89093796085569
- type: nauc_mrr_at_3_std
value: -1.4895016313153366
- type: nauc_mrr_at_5_diff1
value: 49.04321261351915
- type: nauc_mrr_at_5_max
value: 35.85730520949451
- type: nauc_mrr_at_5_std
value: -1.68790556880753
- type: nauc_ndcg_at_1000_diff1
value: 48.294697499154374
- type: nauc_ndcg_at_1000_max
value: 35.167410242367595
- type: nauc_ndcg_at_1000_std
value: -0.6346078535914157
- type: nauc_ndcg_at_100_diff1
value: 48.025525283449014
- type: nauc_ndcg_at_100_max
value: 34.79288511776105
- type: nauc_ndcg_at_100_std
value: -0.7823403044086993
- type: nauc_ndcg_at_10_diff1
value: 47.70793258015258
- type: nauc_ndcg_at_10_max
value: 33.09558927880104
- type: nauc_ndcg_at_10_std
value: -4.7793864166260605
- type: nauc_ndcg_at_1_diff1
value: 52.23915787290734
- type: nauc_ndcg_at_1_max
value: 34.393531787632334
- type: nauc_ndcg_at_1_std
value: -1.452007661016969
- type: nauc_ndcg_at_20_diff1
value: 47.354286045074815
- type: nauc_ndcg_at_20_max
value: 33.686648806027975
- type: nauc_ndcg_at_20_std
value: -3.0189085132476556
- type: nauc_ndcg_at_3_diff1
value: 49.68805334316908
- type: nauc_ndcg_at_3_max
value: 34.196077748056496
- type: nauc_ndcg_at_3_std
value: -2.7167289163768436
- type: nauc_ndcg_at_5_diff1
value: 47.94474868912989
- type: nauc_ndcg_at_5_max
value: 34.00261603413051
- type: nauc_ndcg_at_5_std
value: -3.3541028103046115
- type: nauc_precision_at_1000_diff1
value: -12.0150100710755
- type: nauc_precision_at_1000_max
value: 5.332942816568796
- type: nauc_precision_at_1000_std
value: 14.543288479130458
- type: nauc_precision_at_100_diff1
value: -4.920332181588838
- type: nauc_precision_at_100_max
value: 14.42313332017491
- type: nauc_precision_at_100_std
value: 17.821953321018384
- type: nauc_precision_at_10_diff1
value: 14.70509089079217
- type: nauc_precision_at_10_max
value: 25.381887131649716
- type: nauc_precision_at_10_std
value: 5.226419288645675
- type: nauc_precision_at_1_diff1
value: 52.23915787290734
- type: nauc_precision_at_1_max
value: 34.393531787632334
- type: nauc_precision_at_1_std
value: -1.452007661016969
- type: nauc_precision_at_20_diff1
value: 6.312827641507564
- type: nauc_precision_at_20_max
value: 22.483038562271933
- type: nauc_precision_at_20_std
value: 11.368419856892416
- type: nauc_precision_at_3_diff1
value: 33.271443420273606
- type: nauc_precision_at_3_max
value: 33.571078182106675
- type: nauc_precision_at_3_std
value: 4.47382265155717
- type: nauc_precision_at_5_diff1
value: 23.43287104284656
- type: nauc_precision_at_5_max
value: 30.909085068105313
- type: nauc_precision_at_5_std
value: 5.545672049452433
- type: nauc_recall_at_1000_diff1
value: 35.22615594677707
- type: nauc_recall_at_1000_max
value: 52.0710533173532
- type: nauc_recall_at_1000_std
value: 45.17683523786464
- type: nauc_recall_at_100_diff1
value: 36.2169056956332
- type: nauc_recall_at_100_max
value: 35.02435003210817
- type: nauc_recall_at_100_std
value: 15.833632946282508
- type: nauc_recall_at_10_diff1
value: 39.12440292974848
- type: nauc_recall_at_10_max
value: 28.0546011979648
- type: nauc_recall_at_10_std
value: -9.620558638092172
- type: nauc_recall_at_1_diff1
value: 53.98646767288695
- type: nauc_recall_at_1_max
value: 29.45629077638936
- type: nauc_recall_at_1_std
value: -5.621187380771589
- type: nauc_recall_at_20_diff1
value: 36.39254630768161
- type: nauc_recall_at_20_max
value: 29.277856508751967
- type: nauc_recall_at_20_std
value: -3.048007490798412
- type: nauc_recall_at_3_diff1
value: 45.64706642644958
- type: nauc_recall_at_3_max
value: 31.003050159737413
- type: nauc_recall_at_3_std
value: -4.849763876930667
- type: nauc_recall_at_5_diff1
value: 40.918108859971746
- type: nauc_recall_at_5_max
value: 30.69907335071493
- type: nauc_recall_at_5_std
value: -6.1445436251916865
- type: ndcg_at_1
value: 40.327
- type: ndcg_at_10
value: 49.675000000000004
- type: ndcg_at_100
value: 55.364000000000004
- type: ndcg_at_1000
value: 56.992
- type: ndcg_at_20
value: 51.803999999999995
- type: ndcg_at_3
value: 45.227000000000004
- type: ndcg_at_5
value: 47.244
- type: precision_at_1
value: 40.327
- type: precision_at_10
value: 8.826
- type: precision_at_100
value: 1.354
- type: precision_at_1000
value: 0.167
- type: precision_at_20
value: 5.115
- type: precision_at_3
value: 21.303
- type: precision_at_5
value: 14.726
- type: recall_at_1
value: 33.178999999999995
- type: recall_at_10
value: 61.087
- type: recall_at_100
value: 85.099
- type: recall_at_1000
value: 95.14099999999999
- type: recall_at_20
value: 68.623
- type: recall_at_3
value: 48.245
- type: recall_at_5
value: 53.832
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 44.99
- type: map_at_1
value: 28.089
- type: map_at_10
value: 38.98
- type: map_at_100
value: 40.339000000000006
- type: map_at_1000
value: 40.441
- type: map_at_20
value: 39.702
- type: map_at_3
value: 35.620000000000005
- type: map_at_5
value: 37.657000000000004
- type: mrr_at_1
value: 35.15981735159817
- type: mrr_at_10
value: 44.54075161266937
- type: mrr_at_100
value: 45.435730392436646
- type: mrr_at_1000
value: 45.47673849356812
- type: mrr_at_20
value: 45.05949613726918
- type: mrr_at_3
value: 42.00913242009131
- type: mrr_at_5
value: 43.52739726027392
- type: nauc_map_at_1000_diff1
value: 42.6375513442399
- type: nauc_map_at_1000_max
value: 35.83899956589522
- type: nauc_map_at_1000_std
value: 5.798620017712549
- type: nauc_map_at_100_diff1
value: 42.609712253881504
- type: nauc_map_at_100_max
value: 35.85401871065736
- type: nauc_map_at_100_std
value: 5.829007296755533
- type: nauc_map_at_10_diff1
value: 42.90931172127824
- type: nauc_map_at_10_max
value: 35.46694204511423
- type: nauc_map_at_10_std
value: 5.131477704152026
- type: nauc_map_at_1_diff1
value: 48.066312177855956
- type: nauc_map_at_1_max
value: 30.67745267941573
- type: nauc_map_at_1_std
value: -1.4170737991670943
- type: nauc_map_at_20_diff1
value: 42.730423700784
- type: nauc_map_at_20_max
value: 35.710039616497085
- type: nauc_map_at_20_std
value: 5.363961887475162
- type: nauc_map_at_3_diff1
value: 43.499223646579935
- type: nauc_map_at_3_max
value: 33.872570039621564
- type: nauc_map_at_3_std
value: 3.0787571843453008
- type: nauc_map_at_5_diff1
value: 43.28963642946521
- type: nauc_map_at_5_max
value: 35.18327408279892
- type: nauc_map_at_5_std
value: 4.516467154662473
- type: nauc_mrr_at_1000_diff1
value: 42.71279871641341
- type: nauc_mrr_at_1000_max
value: 37.48825064817496
- type: nauc_mrr_at_1000_std
value: 8.10015025024314
- type: nauc_mrr_at_100_diff1
value: 42.694777404773376
- type: nauc_mrr_at_100_max
value: 37.476741768741086
- type: nauc_mrr_at_100_std
value: 8.11525130417229
- type: nauc_mrr_at_10_diff1
value: 42.954194054560176
- type: nauc_mrr_at_10_max
value: 37.606138578797506
- type: nauc_mrr_at_10_std
value: 8.092519513302399
- type: nauc_mrr_at_1_diff1
value: 48.350790286038574
- type: nauc_mrr_at_1_max
value: 33.97992759739641
- type: nauc_mrr_at_1_std
value: 1.8332987018664093
- type: nauc_mrr_at_20_diff1
value: 42.664983701783044
- type: nauc_mrr_at_20_max
value: 37.47450702110784
- type: nauc_mrr_at_20_std
value: 8.001067634745462
- type: nauc_mrr_at_3_diff1
value: 42.921968602737955
- type: nauc_mrr_at_3_max
value: 37.19599728791262
- type: nauc_mrr_at_3_std
value: 7.4692697422507575
- type: nauc_mrr_at_5_diff1
value: 42.96028546491891
- type: nauc_mrr_at_5_max
value: 37.688350071295915
- type: nauc_mrr_at_5_std
value: 8.213017954012372
- type: nauc_ndcg_at_1000_diff1
value: 40.70763263942397
- type: nauc_ndcg_at_1000_max
value: 37.87768319167602
- type: nauc_ndcg_at_1000_std
value: 9.908807071686738
- type: nauc_ndcg_at_100_diff1
value: 39.97828438221707
- type: nauc_ndcg_at_100_max
value: 37.7723393835996
- type: nauc_ndcg_at_100_std
value: 10.666779466040097
- type: nauc_ndcg_at_10_diff1
value: 41.172233451172936
- type: nauc_ndcg_at_10_max
value: 37.12252131573939
- type: nauc_ndcg_at_10_std
value: 8.273798754436639
- type: nauc_ndcg_at_1_diff1
value: 48.350790286038574
- type: nauc_ndcg_at_1_max
value: 33.97992759739641
- type: nauc_ndcg_at_1_std
value: 1.8332987018664093
- type: nauc_ndcg_at_20_diff1
value: 40.33325895172716
- type: nauc_ndcg_at_20_max
value: 37.36015594019951
- type: nauc_ndcg_at_20_std
value: 8.818556108749302
- type: nauc_ndcg_at_3_diff1
value: 41.652701699747254
- type: nauc_ndcg_at_3_max
value: 35.499109874223294
- type: nauc_ndcg_at_3_std
value: 5.831784865606119
- type: nauc_ndcg_at_5_diff1
value: 41.856346892595475
- type: nauc_ndcg_at_5_max
value: 36.940681835687194
- type: nauc_ndcg_at_5_std
value: 7.507798515093516
- type: nauc_precision_at_1000_diff1
value: -2.4605367806784866
- type: nauc_precision_at_1000_max
value: -0.3538142127162922
- type: nauc_precision_at_1000_std
value: 8.369794961833236
- type: nauc_precision_at_100_diff1
value: -0.34954522096524704
- type: nauc_precision_at_100_max
value: 13.159909603146458
- type: nauc_precision_at_100_std
value: 19.425561514133996
- type: nauc_precision_at_10_diff1
value: 17.048304710148145
- type: nauc_precision_at_10_max
value: 29.816041846806375
- type: nauc_precision_at_10_std
value: 18.358893367243798
- type: nauc_precision_at_1_diff1
value: 48.350790286038574
- type: nauc_precision_at_1_max
value: 33.97992759739641
- type: nauc_precision_at_1_std
value: 1.8332987018664093
- type: nauc_precision_at_20_diff1
value: 10.450903599411344
- type: nauc_precision_at_20_max
value: 25.228916373799127
- type: nauc_precision_at_20_std
value: 18.46893569529936
- type: nauc_precision_at_3_diff1
value: 29.181236567048636
- type: nauc_precision_at_3_max
value: 35.64918262500281
- type: nauc_precision_at_3_std
value: 13.347538222514968
- type: nauc_precision_at_5_diff1
value: 23.693323840550345
- type: nauc_precision_at_5_max
value: 33.972399735191225
- type: nauc_precision_at_5_std
value: 17.107012760554618
- type: nauc_recall_at_1000_diff1
value: 20.297340483227945
- type: nauc_recall_at_1000_max
value: 63.084305970127275
- type: nauc_recall_at_1000_std
value: 63.04655000858784
- type: nauc_recall_at_100_diff1
value: 22.587332148979723
- type: nauc_recall_at_100_max
value: 40.740968468024775
- type: nauc_recall_at_100_std
value: 34.120423684507124
- type: nauc_recall_at_10_diff1
value: 33.361195948673675
- type: nauc_recall_at_10_max
value: 37.1411402410262
- type: nauc_recall_at_10_std
value: 13.475407196166259
- type: nauc_recall_at_1_diff1
value: 48.066312177855956
- type: nauc_recall_at_1_max
value: 30.67745267941573
- type: nauc_recall_at_1_std
value: -1.4170737991670943
- type: nauc_recall_at_20_diff1
value: 28.703982984383984
- type: nauc_recall_at_20_max
value: 37.32929431193496
- type: nauc_recall_at_20_std
value: 16.139135347989903
- type: nauc_recall_at_3_diff1
value: 36.53346179134789
- type: nauc_recall_at_3_max
value: 34.11397914899309
- type: nauc_recall_at_3_std
value: 7.19358019807132
- type: nauc_recall_at_5_diff1
value: 36.24058894947452
- type: nauc_recall_at_5_max
value: 37.00990358651097
- type: nauc_recall_at_5_std
value: 11.074645476821619
- type: ndcg_at_1
value: 35.160000000000004
- type: ndcg_at_10
value: 44.99
- type: ndcg_at_100
value: 50.661
- type: ndcg_at_1000
value: 52.599
- type: ndcg_at_20
value: 47.154
- type: ndcg_at_3
value: 39.843
- type: ndcg_at_5
value: 42.486000000000004
- type: precision_at_1
value: 35.160000000000004
- type: precision_at_10
value: 8.299
- type: precision_at_100
value: 1.2850000000000001
- type: precision_at_1000
value: 0.16199999999999998
- type: precision_at_20
value: 4.84
- type: precision_at_3
value: 19.178
- type: precision_at_5
value: 13.927
- type: recall_at_1
value: 28.089
- type: recall_at_10
value: 57.158
- type: recall_at_100
value: 81.461
- type: recall_at_1000
value: 94.46900000000001
- type: recall_at_20
value: 64.927
- type: recall_at_3
value: 42.775999999999996
- type: recall_at_5
value: 49.719
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: CQADupstackRetrieval is a combined dataset
metrics:
- type: main_score
value: 44.989166666666655
- type: ndcg_at_10
value: 44.989166666666655
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 39.586
- type: map_at_1
value: 27.301
- type: map_at_10
value: 35.022
- type: map_at_100
value: 36.061
- type: map_at_1000
value: 36.146
- type: map_at_20
value: 35.608000000000004
- type: map_at_3
value: 32.978
- type: map_at_5
value: 33.994
- type: mrr_at_1
value: 30.67484662576687
- type: mrr_at_10
value: 38.1696124257474
- type: mrr_at_100
value: 38.99730898994137
- type: mrr_at_1000
value: 39.049871007408136
- type: mrr_at_20
value: 38.62424051396064
- type: mrr_at_3
value: 36.40081799591004
- type: mrr_at_5
value: 37.23670756646219
- type: nauc_map_at_1000_diff1
value: 50.4395097150819
- type: nauc_map_at_1000_max
value: 42.36231476768413
- type: nauc_map_at_1000_std
value: 1.0739414045485742
- type: nauc_map_at_100_diff1
value: 50.4253775421283
- type: nauc_map_at_100_max
value: 42.34508969348633
- type: nauc_map_at_100_std
value: 1.0590256535050135
- type: nauc_map_at_10_diff1
value: 50.74196619464362
- type: nauc_map_at_10_max
value: 42.354326434590284
- type: nauc_map_at_10_std
value: 0.6330167542705694
- type: nauc_map_at_1_diff1
value: 55.7404810490963
- type: nauc_map_at_1_max
value: 40.7676941648045
- type: nauc_map_at_1_std
value: -5.021772566610674
- type: nauc_map_at_20_diff1
value: 50.39792463598886
- type: nauc_map_at_20_max
value: 42.25768760228577
- type: nauc_map_at_20_std
value: 0.8979017700131807
- type: nauc_map_at_3_diff1
value: 51.53267996170815
- type: nauc_map_at_3_max
value: 41.78801756883417
- type: nauc_map_at_3_std
value: -0.6652383024396911
- type: nauc_map_at_5_diff1
value: 50.992783683271504
- type: nauc_map_at_5_max
value: 41.8607977828188
- type: nauc_map_at_5_std
value: 0.3484379897869807
- type: nauc_mrr_at_1000_diff1
value: 48.952907124445126
- type: nauc_mrr_at_1000_max
value: 42.93563741482114
- type: nauc_mrr_at_1000_std
value: 3.0791495753556424
- type: nauc_mrr_at_100_diff1
value: 48.941921107360805
- type: nauc_mrr_at_100_max
value: 42.94419657374061
- type: nauc_mrr_at_100_std
value: 3.075397087180154
- type: nauc_mrr_at_10_diff1
value: 49.098926306303056
- type: nauc_mrr_at_10_max
value: 42.941857820499806
- type: nauc_mrr_at_10_std
value: 2.8184474174054372
- type: nauc_mrr_at_1_diff1
value: 54.428109877009334
- type: nauc_mrr_at_1_max
value: 42.50273386972492
- type: nauc_mrr_at_1_std
value: -2.1811826216412187
- type: nauc_mrr_at_20_diff1
value: 48.82502192775839
- type: nauc_mrr_at_20_max
value: 42.92227277257095
- type: nauc_mrr_at_20_std
value: 2.975812634368533
- type: nauc_mrr_at_3_diff1
value: 49.440009227591176
- type: nauc_mrr_at_3_max
value: 42.95503176290712
- type: nauc_mrr_at_3_std
value: 2.2997128945013796
- type: nauc_mrr_at_5_diff1
value: 49.09846782701398
- type: nauc_mrr_at_5_max
value: 42.51449168285772
- type: nauc_mrr_at_5_std
value: 2.7785816484421297
- type: nauc_ndcg_at_1000_diff1
value: 48.14680758187888
- type: nauc_ndcg_at_1000_max
value: 43.57465718500695
- type: nauc_ndcg_at_1000_std
value: 5.287435676678261
- type: nauc_ndcg_at_100_diff1
value: 47.66081605743284
- type: nauc_ndcg_at_100_max
value: 43.28156751251163
- type: nauc_ndcg_at_100_std
value: 4.959626409663624
- type: nauc_ndcg_at_10_diff1
value: 48.25075619623878
- type: nauc_ndcg_at_10_max
value: 43.00688660666578
- type: nauc_ndcg_at_10_std
value: 3.2319193368891637
- type: nauc_ndcg_at_1_diff1
value: 54.428109877009334
- type: nauc_ndcg_at_1_max
value: 42.50273386972492
- type: nauc_ndcg_at_1_std
value: -2.1811826216412187
- type: nauc_ndcg_at_20_diff1
value: 47.1943098627403
- type: nauc_ndcg_at_20_max
value: 42.86954491768707
- type: nauc_ndcg_at_20_std
value: 4.08583080150737
- type: nauc_ndcg_at_3_diff1
value: 49.32681523192246
- type: nauc_ndcg_at_3_max
value: 42.46898641470274
- type: nauc_ndcg_at_3_std
value: 1.7416962407725236
- type: nauc_ndcg_at_5_diff1
value: 48.59647012439291
- type: nauc_ndcg_at_5_max
value: 42.07098889846439
- type: nauc_ndcg_at_5_std
value: 2.979621233356828
- type: nauc_precision_at_1000_diff1
value: -1.7366334161587105
- type: nauc_precision_at_1000_max
value: 17.70969166396819
- type: nauc_precision_at_1000_std
value: 17.50619975322144
- type: nauc_precision_at_100_diff1
value: 10.082579982582155
- type: nauc_precision_at_100_max
value: 28.024893516091776
- type: nauc_precision_at_100_std
value: 18.41413013357596
- type: nauc_precision_at_10_diff1
value: 28.796167732373657
- type: nauc_precision_at_10_max
value: 40.37340024485382
- type: nauc_precision_at_10_std
value: 13.718572711091733
- type: nauc_precision_at_1_diff1
value: 54.428109877009334
- type: nauc_precision_at_1_max
value: 42.50273386972492
- type: nauc_precision_at_1_std
value: -2.1811826216412187
- type: nauc_precision_at_20_diff1
value: 19.82691920771315
- type: nauc_precision_at_20_max
value: 34.45075390159975
- type: nauc_precision_at_20_std
value: 16.410812072348058
- type: nauc_precision_at_3_diff1
value: 40.85430254962678
- type: nauc_precision_at_3_max
value: 43.63016056067074
- type: nauc_precision_at_3_std
value: 9.322014634477581
- type: nauc_precision_at_5_diff1
value: 35.830272848975795
- type: nauc_precision_at_5_max
value: 41.30047691620363
- type: nauc_precision_at_5_std
value: 13.145693992266565
- type: nauc_recall_at_1000_diff1
value: 35.532000545890504
- type: nauc_recall_at_1000_max
value: 50.714223194510325
- type: nauc_recall_at_1000_std
value: 43.09037309139045
- type: nauc_recall_at_100_diff1
value: 35.11024488875192
- type: nauc_recall_at_100_max
value: 43.0874566265193
- type: nauc_recall_at_100_std
value: 19.70628521846854
- type: nauc_recall_at_10_diff1
value: 40.36203726741153
- type: nauc_recall_at_10_max
value: 42.581482582576726
- type: nauc_recall_at_10_std
value: 8.642553371022348
- type: nauc_recall_at_1_diff1
value: 55.7404810490963
- type: nauc_recall_at_1_max
value: 40.7676941648045
- type: nauc_recall_at_1_std
value: -5.021772566610674
- type: nauc_recall_at_20_diff1
value: 35.97348868186562
- type: nauc_recall_at_20_max
value: 41.82695933305065
- type: nauc_recall_at_20_std
value: 11.444957541593585
- type: nauc_recall_at_3_diff1
value: 44.20020470014979
- type: nauc_recall_at_3_max
value: 40.84130855296979
- type: nauc_recall_at_3_std
value: 5.004883338558809
- type: nauc_recall_at_5_diff1
value: 42.08756885472078
- type: nauc_recall_at_5_max
value: 39.90323783606852
- type: nauc_recall_at_5_std
value: 8.085182534171127
- type: ndcg_at_1
value: 30.675
- type: ndcg_at_10
value: 39.586
- type: ndcg_at_100
value: 44.737
- type: ndcg_at_1000
value: 46.863
- type: ndcg_at_20
value: 41.495
- type: ndcg_at_3
value: 35.8
- type: ndcg_at_5
value: 37.3
- type: precision_at_1
value: 30.675
- type: precision_at_10
value: 6.196
- type: precision_at_100
value: 0.9570000000000001
- type: precision_at_1000
value: 0.122
- type: precision_at_20
value: 3.6350000000000002
- type: precision_at_3
value: 15.337
- type: precision_at_5
value: 10.337
- type: recall_at_1
value: 27.301
- type: recall_at_10
value: 50.346999999999994
- type: recall_at_100
value: 74.459
- type: recall_at_1000
value: 90.018
- type: recall_at_20
value: 57.473
- type: recall_at_3
value: 39.672000000000004
- type: recall_at_5
value: 43.383
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 32.842
- type: map_at_1
value: 19.527
- type: map_at_10
value: 27.711999999999996
- type: map_at_100
value: 28.98
- type: map_at_1000
value: 29.108
- type: map_at_20
value: 28.407
- type: map_at_3
value: 25.023
- type: map_at_5
value: 26.528000000000002
- type: mrr_at_1
value: 23.675154852030282
- type: mrr_at_10
value: 31.810676323752784
- type: mrr_at_100
value: 32.788970614380716
- type: mrr_at_1000
value: 32.86028758975889
- type: mrr_at_20
value: 32.35935756676056
- type: mrr_at_3
value: 29.41615049323246
- type: mrr_at_5
value: 30.785730672172633
- type: nauc_map_at_1000_diff1
value: 35.597766688968015
- type: nauc_map_at_1000_max
value: 26.295790183159845
- type: nauc_map_at_1000_std
value: -0.04229904865958209
- type: nauc_map_at_100_diff1
value: 35.568782622469925
- type: nauc_map_at_100_max
value: 26.27850795471227
- type: nauc_map_at_100_std
value: -0.04944875782811099
- type: nauc_map_at_10_diff1
value: 35.63760937893694
- type: nauc_map_at_10_max
value: 26.130094042028233
- type: nauc_map_at_10_std
value: -0.6896882769027717
- type: nauc_map_at_1_diff1
value: 41.759098341890976
- type: nauc_map_at_1_max
value: 23.918885427783326
- type: nauc_map_at_1_std
value: -2.1383574897865074
- type: nauc_map_at_20_diff1
value: 35.55706530442612
- type: nauc_map_at_20_max
value: 26.23339626569677
- type: nauc_map_at_20_std
value: -0.162172033918129
- type: nauc_map_at_3_diff1
value: 37.22183376355153
- type: nauc_map_at_3_max
value: 25.770512522122186
- type: nauc_map_at_3_std
value: -1.3105892187778403
- type: nauc_map_at_5_diff1
value: 36.205913161663084
- type: nauc_map_at_5_max
value: 25.953300641502064
- type: nauc_map_at_5_std
value: -0.7987363137547906
- type: nauc_mrr_at_1000_diff1
value: 34.864016559617646
- type: nauc_mrr_at_1000_max
value: 26.8689525348564
- type: nauc_mrr_at_1000_std
value: -0.5839923973914446
- type: nauc_mrr_at_100_diff1
value: 34.83820469598538
- type: nauc_mrr_at_100_max
value: 26.864669056231282
- type: nauc_mrr_at_100_std
value: -0.5785645654158633
- type: nauc_mrr_at_10_diff1
value: 34.81868397381981
- type: nauc_mrr_at_10_max
value: 26.79988560460627
- type: nauc_mrr_at_10_std
value: -1.1113808365827318
- type: nauc_mrr_at_1_diff1
value: 40.0281507903504
- type: nauc_mrr_at_1_max
value: 25.036735941806583
- type: nauc_mrr_at_1_std
value: -2.508700799268523
- type: nauc_mrr_at_20_diff1
value: 34.81954537357966
- type: nauc_mrr_at_20_max
value: 26.877673033315453
- type: nauc_mrr_at_20_std
value: -0.6706028107452919
- type: nauc_mrr_at_3_diff1
value: 35.87313782549696
- type: nauc_mrr_at_3_max
value: 26.776261693392335
- type: nauc_mrr_at_3_std
value: -1.8010591328112908
- type: nauc_mrr_at_5_diff1
value: 35.31673912159536
- type: nauc_mrr_at_5_max
value: 26.78720786106881
- type: nauc_mrr_at_5_std
value: -1.3096326953900546
- type: nauc_ndcg_at_1000_diff1
value: 33.43105244339048
- type: nauc_ndcg_at_1000_max
value: 27.52195065724684
- type: nauc_ndcg_at_1000_std
value: 2.8376056562675744
- type: nauc_ndcg_at_100_diff1
value: 32.90916846420573
- type: nauc_ndcg_at_100_max
value: 27.27161017736065
- type: nauc_ndcg_at_100_std
value: 2.8703122625872126
- type: nauc_ndcg_at_10_diff1
value: 33.12714979317447
- type: nauc_ndcg_at_10_max
value: 26.67762031747992
- type: nauc_ndcg_at_10_std
value: -0.1341345572932233
- type: nauc_ndcg_at_1_diff1
value: 40.0281507903504
- type: nauc_ndcg_at_1_max
value: 25.036735941806583
- type: nauc_ndcg_at_1_std
value: -2.508700799268523
- type: nauc_ndcg_at_20_diff1
value: 32.891656138688546
- type: nauc_ndcg_at_20_max
value: 26.991976404027163
- type: nauc_ndcg_at_20_std
value: 1.6050741106677746
- type: nauc_ndcg_at_3_diff1
value: 35.576958713955484
- type: nauc_ndcg_at_3_max
value: 26.41687745899445
- type: nauc_ndcg_at_3_std
value: -1.5326687067002291
- type: nauc_ndcg_at_5_diff1
value: 34.27335619067276
- type: nauc_ndcg_at_5_max
value: 26.479515412084208
- type: nauc_ndcg_at_5_std
value: -0.5597648935666003
- type: nauc_precision_at_1000_diff1
value: -0.18660914306684007
- type: nauc_precision_at_1000_max
value: 7.268255385799229
- type: nauc_precision_at_1000_std
value: -0.1968875268478991
- type: nauc_precision_at_100_diff1
value: 7.386701205054449
- type: nauc_precision_at_100_max
value: 15.477735603019607
- type: nauc_precision_at_100_std
value: 4.753153414679307
- type: nauc_precision_at_10_diff1
value: 18.4668296945938
- type: nauc_precision_at_10_max
value: 25.457144217779597
- type: nauc_precision_at_10_std
value: 0.40165373733963605
- type: nauc_precision_at_1_diff1
value: 40.0281507903504
- type: nauc_precision_at_1_max
value: 25.036735941806583
- type: nauc_precision_at_1_std
value: -2.508700799268523
- type: nauc_precision_at_20_diff1
value: 14.751135844289335
- type: nauc_precision_at_20_max
value: 22.763373329576293
- type: nauc_precision_at_20_std
value: 4.360731801761864
- type: nauc_precision_at_3_diff1
value: 28.154753888265393
- type: nauc_precision_at_3_max
value: 27.838427033527147
- type: nauc_precision_at_3_std
value: -1.0042621266717804
- type: nauc_precision_at_5_diff1
value: 23.549026872711423
- type: nauc_precision_at_5_max
value: 27.192214745385044
- type: nauc_precision_at_5_std
value: 0.4455206110174471
- type: nauc_recall_at_1000_diff1
value: 17.905404210815632
- type: nauc_recall_at_1000_max
value: 32.8674418535776
- type: nauc_recall_at_1000_std
value: 35.187050415735435
- type: nauc_recall_at_100_diff1
value: 20.903609751984757
- type: nauc_recall_at_100_max
value: 27.180306691518364
- type: nauc_recall_at_100_std
value: 17.553030959393297
- type: nauc_recall_at_10_diff1
value: 25.615147693464387
- type: nauc_recall_at_10_max
value: 25.97062699453565
- type: nauc_recall_at_10_std
value: 2.2181702899826576
- type: nauc_recall_at_1_diff1
value: 41.759098341890976
- type: nauc_recall_at_1_max
value: 23.918885427783326
- type: nauc_recall_at_1_std
value: -2.1383574897865074
- type: nauc_recall_at_20_diff1
value: 23.922775940094386
- type: nauc_recall_at_20_max
value: 26.384627814902785
- type: nauc_recall_at_20_std
value: 7.944532403561578
- type: nauc_recall_at_3_diff1
value: 32.26543270634743
- type: nauc_recall_at_3_max
value: 26.36357710828272
- type: nauc_recall_at_3_std
value: -0.42723331708340706
- type: nauc_recall_at_5_diff1
value: 29.080464141763336
- type: nauc_recall_at_5_max
value: 25.81238438303652
- type: nauc_recall_at_5_std
value: 1.1649311168287726
- type: ndcg_at_1
value: 23.674999999999997
- type: ndcg_at_10
value: 32.842
- type: ndcg_at_100
value: 38.64
- type: ndcg_at_1000
value: 41.367
- type: ndcg_at_20
value: 35.032999999999994
- type: ndcg_at_3
value: 28.166000000000004
- type: ndcg_at_5
value: 30.407
- type: precision_at_1
value: 23.674999999999997
- type: precision_at_10
value: 6.005
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.146
- type: precision_at_20
value: 3.6580000000000004
- type: precision_at_3
value: 13.352
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 19.527
- type: recall_at_10
value: 44.096999999999994
- type: recall_at_100
value: 69.962
- type: recall_at_1000
value: 89.035
- type: recall_at_20
value: 52.166000000000004
- type: recall_at_3
value: 30.946
- type: recall_at_5
value: 36.789
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 46.54
- type: map_at_1
value: 29.953999999999997
- type: map_at_10
value: 40.742
- type: map_at_100
value: 41.964
- type: map_at_1000
value: 42.059999999999995
- type: map_at_20
value: 41.426
- type: map_at_3
value: 37.378
- type: map_at_5
value: 39.267
- type: mrr_at_1
value: 34.701492537313435
- type: mrr_at_10
value: 44.29978085761664
- type: mrr_at_100
value: 45.205551401915486
- type: mrr_at_1000
value: 45.24735017384963
- type: mrr_at_20
value: 44.85338423755729
- type: mrr_at_3
value: 41.57338308457707
- type: mrr_at_5
value: 43.19185323383077
- type: nauc_map_at_1000_diff1
value: 48.45170522932164
- type: nauc_map_at_1000_max
value: 31.544164363591204
- type: nauc_map_at_1000_std
value: 0.8661088818146858
- type: nauc_map_at_100_diff1
value: 48.47347800061323
- type: nauc_map_at_100_max
value: 31.568637596620313
- type: nauc_map_at_100_std
value: 0.9252699336843858
- type: nauc_map_at_10_diff1
value: 48.64849891585432
- type: nauc_map_at_10_max
value: 31.40371265579746
- type: nauc_map_at_10_std
value: 0.7088016563713089
- type: nauc_map_at_1_diff1
value: 53.57918993108331
- type: nauc_map_at_1_max
value: 31.392632653740993
- type: nauc_map_at_1_std
value: -2.857306170463933
- type: nauc_map_at_20_diff1
value: 48.49084353023969
- type: nauc_map_at_20_max
value: 31.470313174779374
- type: nauc_map_at_20_std
value: 0.8950296035234309
- type: nauc_map_at_3_diff1
value: 49.273481161619806
- type: nauc_map_at_3_max
value: 31.101471509782826
- type: nauc_map_at_3_std
value: -0.886510096257905
- type: nauc_map_at_5_diff1
value: 48.85344288229106
- type: nauc_map_at_5_max
value: 31.32633663238284
- type: nauc_map_at_5_std
value: -0.44752909698881177
- type: nauc_mrr_at_1000_diff1
value: 46.27593166906613
- type: nauc_mrr_at_1000_max
value: 31.637594372116336
- type: nauc_mrr_at_1000_std
value: 0.8444917550670064
- type: nauc_mrr_at_100_diff1
value: 46.27161543033672
- type: nauc_mrr_at_100_max
value: 31.64330655339695
- type: nauc_mrr_at_100_std
value: 0.8717446416398773
- type: nauc_mrr_at_10_diff1
value: 46.100348481312864
- type: nauc_mrr_at_10_max
value: 31.594271897882237
- type: nauc_mrr_at_10_std
value: 0.8807168907688873
- type: nauc_mrr_at_1_diff1
value: 51.35163098909763
- type: nauc_mrr_at_1_max
value: 31.99084441327899
- type: nauc_mrr_at_1_std
value: -2.688594880742662
- type: nauc_mrr_at_20_diff1
value: 46.18178546174727
- type: nauc_mrr_at_20_max
value: 31.639111674119448
- type: nauc_mrr_at_20_std
value: 0.9855008641374622
- type: nauc_mrr_at_3_diff1
value: 46.307484835305864
- type: nauc_mrr_at_3_max
value: 31.35563850804847
- type: nauc_mrr_at_3_std
value: -0.3419536587707561
- type: nauc_mrr_at_5_diff1
value: 46.17646418781234
- type: nauc_mrr_at_5_max
value: 31.313474270239833
- type: nauc_mrr_at_5_std
value: -0.08656550526568331
- type: nauc_ndcg_at_1000_diff1
value: 46.12095795101613
- type: nauc_ndcg_at_1000_max
value: 31.989083597726314
- type: nauc_ndcg_at_1000_std
value: 3.2965704707660763
- type: nauc_ndcg_at_100_diff1
value: 46.05376249841318
- type: nauc_ndcg_at_100_max
value: 32.39195988574972
- type: nauc_ndcg_at_100_std
value: 4.518018135593347
- type: nauc_ndcg_at_10_diff1
value: 46.133631183744875
- type: nauc_ndcg_at_10_max
value: 31.45358876172339
- type: nauc_ndcg_at_10_std
value: 3.4254370918871055
- type: nauc_ndcg_at_1_diff1
value: 51.35163098909763
- type: nauc_ndcg_at_1_max
value: 31.99084441327899
- type: nauc_ndcg_at_1_std
value: -2.688594880742662
- type: nauc_ndcg_at_20_diff1
value: 45.94584949766954
- type: nauc_ndcg_at_20_max
value: 31.689777515111295
- type: nauc_ndcg_at_20_std
value: 4.189082428922442
- type: nauc_ndcg_at_3_diff1
value: 46.5057835389752
- type: nauc_ndcg_at_3_max
value: 30.941407592082047
- type: nauc_ndcg_at_3_std
value: -0.042473944857831535
- type: nauc_ndcg_at_5_diff1
value: 46.369027395136136
- type: nauc_ndcg_at_5_max
value: 31.057841776505352
- type: nauc_ndcg_at_5_std
value: 0.6878993420489522
- type: nauc_precision_at_1000_diff1
value: -17.30759714093202
- type: nauc_precision_at_1000_max
value: -4.441155558458858
- type: nauc_precision_at_1000_std
value: 1.5537300718220326
- type: nauc_precision_at_100_diff1
value: -7.18920438222021
- type: nauc_precision_at_100_max
value: 8.017878121399253
- type: nauc_precision_at_100_std
value: 11.357132919349102
- type: nauc_precision_at_10_diff1
value: 15.202451884794076
- type: nauc_precision_at_10_max
value: 19.077295902881417
- type: nauc_precision_at_10_std
value: 9.885526867355805
- type: nauc_precision_at_1_diff1
value: 51.35163098909763
- type: nauc_precision_at_1_max
value: 31.99084441327899
- type: nauc_precision_at_1_std
value: -2.688594880742662
- type: nauc_precision_at_20_diff1
value: 6.827461091494899
- type: nauc_precision_at_20_max
value: 15.27268633497114
- type: nauc_precision_at_20_std
value: 11.515826649647384
- type: nauc_precision_at_3_diff1
value: 31.043021807472027
- type: nauc_precision_at_3_max
value: 26.22457157531548
- type: nauc_precision_at_3_std
value: 1.788215968301994
- type: nauc_precision_at_5_diff1
value: 25.030185818513235
- type: nauc_precision_at_5_max
value: 23.680129160901537
- type: nauc_precision_at_5_std
value: 4.303018899688115
- type: nauc_recall_at_1000_diff1
value: 28.68826642607512
- type: nauc_recall_at_1000_max
value: 42.33849804103852
- type: nauc_recall_at_1000_std
value: 42.67413575876864
- type: nauc_recall_at_100_diff1
value: 36.51494878715
- type: nauc_recall_at_100_max
value: 37.4764995034434
- type: nauc_recall_at_100_std
value: 28.295671266661017
- type: nauc_recall_at_10_diff1
value: 39.416721111463524
- type: nauc_recall_at_10_max
value: 29.95985608454179
- type: nauc_recall_at_10_std
value: 12.423335839786201
- type: nauc_recall_at_1_diff1
value: 53.57918993108331
- type: nauc_recall_at_1_max
value: 31.392632653740993
- type: nauc_recall_at_1_std
value: -2.857306170463933
- type: nauc_recall_at_20_diff1
value: 38.228803480194046
- type: nauc_recall_at_20_max
value: 30.87261362975955
- type: nauc_recall_at_20_std
value: 16.977113091834095
- type: nauc_recall_at_3_diff1
value: 43.154348566653155
- type: nauc_recall_at_3_max
value: 29.54536633744803
- type: nauc_recall_at_3_std
value: 2.02842672250621
- type: nauc_recall_at_5_diff1
value: 41.00436246072242
- type: nauc_recall_at_5_max
value: 29.413569555348023
- type: nauc_recall_at_5_std
value: 3.845214021958289
- type: ndcg_at_1
value: 34.701
- type: ndcg_at_10
value: 46.54
- type: ndcg_at_100
value: 51.754999999999995
- type: ndcg_at_1000
value: 53.71
- type: ndcg_at_20
value: 48.679
- type: ndcg_at_3
value: 40.892
- type: ndcg_at_5
value: 43.595
- type: precision_at_1
value: 34.701
- type: precision_at_10
value: 8.004
- type: precision_at_100
value: 1.185
- type: precision_at_1000
value: 0.145
- type: precision_at_20
value: 4.632
- type: precision_at_3
value: 18.719
- type: precision_at_5
value: 13.245999999999999
- type: recall_at_1
value: 29.953999999999997
- type: recall_at_10
value: 60.246
- type: recall_at_100
value: 82.128
- type: recall_at_1000
value: 95.622
- type: recall_at_20
value: 67.756
- type: recall_at_3
value: 45.096000000000004
- type: recall_at_5
value: 51.9
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 44.718999999999994
- type: map_at_1
value: 28.383999999999997
- type: map_at_10
value: 38.422
- type: map_at_100
value: 40.058
- type: map_at_1000
value: 40.276
- type: map_at_20
value: 39.301
- type: map_at_3
value: 35.205
- type: map_at_5
value: 36.803999999999995
- type: mrr_at_1
value: 33.59683794466403
- type: mrr_at_10
value: 42.837536859275986
- type: mrr_at_100
value: 43.7501703455481
- type: mrr_at_1000
value: 43.79258407771123
- type: mrr_at_20
value: 43.36044710445095
- type: mrr_at_3
value: 40.15151515151516
- type: mrr_at_5
value: 41.74242424242425
- type: nauc_map_at_1000_diff1
value: 47.934826596875304
- type: nauc_map_at_1000_max
value: 32.39759438116062
- type: nauc_map_at_1000_std
value: 0.9489007346763054
- type: nauc_map_at_100_diff1
value: 47.94844822157888
- type: nauc_map_at_100_max
value: 32.51485845519537
- type: nauc_map_at_100_std
value: 0.8094339925545622
- type: nauc_map_at_10_diff1
value: 48.251456404874645
- type: nauc_map_at_10_max
value: 31.412906399154245
- type: nauc_map_at_10_std
value: -0.7024825737369933
- type: nauc_map_at_1_diff1
value: 55.81906101970174
- type: nauc_map_at_1_max
value: 31.811715334193796
- type: nauc_map_at_1_std
value: -6.17056859281584
- type: nauc_map_at_20_diff1
value: 47.80902650237369
- type: nauc_map_at_20_max
value: 32.22465403023091
- type: nauc_map_at_20_std
value: 0.20706526946705656
- type: nauc_map_at_3_diff1
value: 49.97333984346632
- type: nauc_map_at_3_max
value: 31.58195498640799
- type: nauc_map_at_3_std
value: -2.577539707727459
- type: nauc_map_at_5_diff1
value: 49.40005767350608
- type: nauc_map_at_5_max
value: 30.998435600377434
- type: nauc_map_at_5_std
value: -2.1231771618690307
- type: nauc_mrr_at_1000_diff1
value: 46.86811371969663
- type: nauc_mrr_at_1000_max
value: 31.25147138171024
- type: nauc_mrr_at_1000_std
value: 1.9954422477585918
- type: nauc_mrr_at_100_diff1
value: 46.855870345882195
- type: nauc_mrr_at_100_max
value: 31.263524035665966
- type: nauc_mrr_at_100_std
value: 2.0160751193806568
- type: nauc_mrr_at_10_diff1
value: 46.93294772825783
- type: nauc_mrr_at_10_max
value: 30.927002048701663
- type: nauc_mrr_at_10_std
value: 1.6538220080908224
- type: nauc_mrr_at_1_diff1
value: 52.416386548395664
- type: nauc_mrr_at_1_max
value: 32.28582003787206
- type: nauc_mrr_at_1_std
value: -2.154991145714492
- type: nauc_mrr_at_20_diff1
value: 46.71796185319694
- type: nauc_mrr_at_20_max
value: 31.16219902794994
- type: nauc_mrr_at_20_std
value: 1.8590646572728409
- type: nauc_mrr_at_3_diff1
value: 47.697100317669914
- type: nauc_mrr_at_3_max
value: 30.821806030159383
- type: nauc_mrr_at_3_std
value: 1.1927626358099177
- type: nauc_mrr_at_5_diff1
value: 47.065272061365704
- type: nauc_mrr_at_5_max
value: 30.299230962805023
- type: nauc_mrr_at_5_std
value: 1.3225842862629529
- type: nauc_ndcg_at_1000_diff1
value: 45.20612583136058
- type: nauc_ndcg_at_1000_max
value: 33.51931869947315
- type: nauc_ndcg_at_1000_std
value: 4.923707509620363
- type: nauc_ndcg_at_100_diff1
value: 44.76206243393775
- type: nauc_ndcg_at_100_max
value: 33.57771606755598
- type: nauc_ndcg_at_100_std
value: 5.30915563331338
- type: nauc_ndcg_at_10_diff1
value: 45.12714032463827
- type: nauc_ndcg_at_10_max
value: 30.351909495610492
- type: nauc_ndcg_at_10_std
value: 2.3972947289996873
- type: nauc_ndcg_at_1_diff1
value: 52.416386548395664
- type: nauc_ndcg_at_1_max
value: 32.28582003787206
- type: nauc_ndcg_at_1_std
value: -2.154991145714492
- type: nauc_ndcg_at_20_diff1
value: 44.20281844000005
- type: nauc_ndcg_at_20_max
value: 32.14112739396226
- type: nauc_ndcg_at_20_std
value: 3.3971385462591916
- type: nauc_ndcg_at_3_diff1
value: 47.0633767031858
- type: nauc_ndcg_at_3_max
value: 31.032896053733435
- type: nauc_ndcg_at_3_std
value: 0.6827544906310201
- type: nauc_ndcg_at_5_diff1
value: 46.735352294106484
- type: nauc_ndcg_at_5_max
value: 29.784992270528544
- type: nauc_ndcg_at_5_std
value: 0.8685943819516141
- type: nauc_precision_at_1000_diff1
value: -12.223330179860852
- type: nauc_precision_at_1000_max
value: -9.266492213777273
- type: nauc_precision_at_1000_std
value: 19.0569899587788
- type: nauc_precision_at_100_diff1
value: -5.803751085072067
- type: nauc_precision_at_100_max
value: 3.448932057044294
- type: nauc_precision_at_100_std
value: 23.470863527030627
- type: nauc_precision_at_10_diff1
value: 8.887357341361907
- type: nauc_precision_at_10_max
value: 18.67165390928126
- type: nauc_precision_at_10_std
value: 19.158543337955404
- type: nauc_precision_at_1_diff1
value: 52.416386548395664
- type: nauc_precision_at_1_max
value: 32.28582003787206
- type: nauc_precision_at_1_std
value: -2.154991145714492
- type: nauc_precision_at_20_diff1
value: 0.942496138409553
- type: nauc_precision_at_20_max
value: 18.86957127610774
- type: nauc_precision_at_20_std
value: 24.075503903246496
- type: nauc_precision_at_3_diff1
value: 28.15363877307106
- type: nauc_precision_at_3_max
value: 27.064928137991824
- type: nauc_precision_at_3_std
value: 8.632807104504753
- type: nauc_precision_at_5_diff1
value: 20.805862332497973
- type: nauc_precision_at_5_max
value: 21.420201475758404
- type: nauc_precision_at_5_std
value: 12.380239645425714
- type: nauc_recall_at_1000_diff1
value: 18.478341468055547
- type: nauc_recall_at_1000_max
value: 56.293560115074506
- type: nauc_recall_at_1000_std
value: 64.31607185065428
- type: nauc_recall_at_100_diff1
value: 26.737267337771886
- type: nauc_recall_at_100_max
value: 38.011889141496326
- type: nauc_recall_at_100_std
value: 30.44904690114732
- type: nauc_recall_at_10_diff1
value: 35.22772732735716
- type: nauc_recall_at_10_max
value: 26.000054115159486
- type: nauc_recall_at_10_std
value: 5.174264254271206
- type: nauc_recall_at_1_diff1
value: 55.81906101970174
- type: nauc_recall_at_1_max
value: 31.811715334193796
- type: nauc_recall_at_1_std
value: -6.17056859281584
- type: nauc_recall_at_20_diff1
value: 30.48493302415641
- type: nauc_recall_at_20_max
value: 31.05487040370753
- type: nauc_recall_at_20_std
value: 10.319948318834136
- type: nauc_recall_at_3_diff1
value: 43.12289512340243
- type: nauc_recall_at_3_max
value: 28.176279771026135
- type: nauc_recall_at_3_std
value: -0.1775154523381921
- type: nauc_recall_at_5_diff1
value: 40.9934933741234
- type: nauc_recall_at_5_max
value: 25.569156290584733
- type: nauc_recall_at_5_std
value: 0.21166696686855038
- type: ndcg_at_1
value: 33.597
- type: ndcg_at_10
value: 44.718999999999994
- type: ndcg_at_100
value: 50.324000000000005
- type: ndcg_at_1000
value: 52.468
- type: ndcg_at_20
value: 46.822
- type: ndcg_at_3
value: 39.558
- type: ndcg_at_5
value: 41.827999999999996
- type: precision_at_1
value: 33.597
- type: precision_at_10
value: 8.735
- type: precision_at_100
value: 1.6420000000000001
- type: precision_at_1000
value: 0.246
- type: precision_at_20
value: 5.375
- type: precision_at_3
value: 18.511
- type: precision_at_5
value: 13.399
- type: recall_at_1
value: 28.383999999999997
- type: recall_at_10
value: 56.425000000000004
- type: recall_at_100
value: 82.01899999999999
- type: recall_at_1000
value: 95.285
- type: recall_at_20
value: 64.615
- type: recall_at_3
value: 42.171
- type: recall_at_5
value: 48.296
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 38.269999999999996
- type: map_at_1
value: 25.324999999999996
- type: map_at_10
value: 33.263
- type: map_at_100
value: 34.304
- type: map_at_1000
value: 34.394000000000005
- type: map_at_20
value: 33.827
- type: map_at_3
value: 30.259999999999998
- type: map_at_5
value: 31.832
- type: mrr_at_1
value: 27.171903881700555
- type: mrr_at_10
value: 35.334991051257234
- type: mrr_at_100
value: 36.251283465952355
- type: mrr_at_1000
value: 36.316236092511055
- type: mrr_at_20
value: 35.87141909945257
- type: mrr_at_3
value: 32.71719038817007
- type: mrr_at_5
value: 34.19593345656194
- type: nauc_map_at_1000_diff1
value: 39.614836211522714
- type: nauc_map_at_1000_max
value: 22.019768626310192
- type: nauc_map_at_1000_std
value: -1.5238708712112499
- type: nauc_map_at_100_diff1
value: 39.63008548572307
- type: nauc_map_at_100_max
value: 22.044756063752345
- type: nauc_map_at_100_std
value: -1.4869190221494792
- type: nauc_map_at_10_diff1
value: 39.73025012395569
- type: nauc_map_at_10_max
value: 22.117710178892107
- type: nauc_map_at_10_std
value: -2.5129984871932973
- type: nauc_map_at_1_diff1
value: 45.015617718902654
- type: nauc_map_at_1_max
value: 19.313800263189638
- type: nauc_map_at_1_std
value: -4.763931386681675
- type: nauc_map_at_20_diff1
value: 39.53678019013766
- type: nauc_map_at_20_max
value: 21.880316719428258
- type: nauc_map_at_20_std
value: -1.882003994523355
- type: nauc_map_at_3_diff1
value: 40.37307665298228
- type: nauc_map_at_3_max
value: 20.851976075322533
- type: nauc_map_at_3_std
value: -2.429569082966531
- type: nauc_map_at_5_diff1
value: 39.763015635086
- type: nauc_map_at_5_max
value: 22.010102196900725
- type: nauc_map_at_5_std
value: -2.654896415670943
- type: nauc_mrr_at_1000_diff1
value: 39.74071733680025
- type: nauc_mrr_at_1000_max
value: 21.67309640681989
- type: nauc_mrr_at_1000_std
value: -1.4003373135477462
- type: nauc_mrr_at_100_diff1
value: 39.730614151966485
- type: nauc_mrr_at_100_max
value: 21.678390048971767
- type: nauc_mrr_at_100_std
value: -1.3655362623563931
- type: nauc_mrr_at_10_diff1
value: 39.7900031013241
- type: nauc_mrr_at_10_max
value: 21.73643491725051
- type: nauc_mrr_at_10_std
value: -2.1175389838696312
- type: nauc_mrr_at_1_diff1
value: 46.165736140679776
- type: nauc_mrr_at_1_max
value: 20.071083446822147
- type: nauc_mrr_at_1_std
value: -5.018909100858311
- type: nauc_mrr_at_20_diff1
value: 39.6371295762885
- type: nauc_mrr_at_20_max
value: 21.659557440270973
- type: nauc_mrr_at_20_std
value: -1.4909603958341686
- type: nauc_mrr_at_3_diff1
value: 40.351150322758876
- type: nauc_mrr_at_3_max
value: 20.83706249041544
- type: nauc_mrr_at_3_std
value: -1.956027373253151
- type: nauc_mrr_at_5_diff1
value: 39.57759107791911
- type: nauc_mrr_at_5_max
value: 21.79552045204151
- type: nauc_mrr_at_5_std
value: -2.1507013120951126
- type: nauc_ndcg_at_1000_diff1
value: 37.717619356839016
- type: nauc_ndcg_at_1000_max
value: 22.545375504379805
- type: nauc_ndcg_at_1000_std
value: 1.682348628141016
- type: nauc_ndcg_at_100_diff1
value: 37.656027803682626
- type: nauc_ndcg_at_100_max
value: 22.49278246383637
- type: nauc_ndcg_at_100_std
value: 2.6818118152357773
- type: nauc_ndcg_at_10_diff1
value: 37.834954205539766
- type: nauc_ndcg_at_10_max
value: 22.655839885558443
- type: nauc_ndcg_at_10_std
value: -1.97159619786231
- type: nauc_ndcg_at_1_diff1
value: 46.165736140679776
- type: nauc_ndcg_at_1_max
value: 20.071083446822147
- type: nauc_ndcg_at_1_std
value: -5.018909100858311
- type: nauc_ndcg_at_20_diff1
value: 37.171914857454304
- type: nauc_ndcg_at_20_max
value: 21.858904801745897
- type: nauc_ndcg_at_20_std
value: 0.3809854859496657
- type: nauc_ndcg_at_3_diff1
value: 38.4460623883955
- type: nauc_ndcg_at_3_max
value: 20.95244159463402
- type: nauc_ndcg_at_3_std
value: -1.2685011660086651
- type: nauc_ndcg_at_5_diff1
value: 37.48831054573054
- type: nauc_ndcg_at_5_max
value: 22.625921624640526
- type: nauc_ndcg_at_5_std
value: -2.049221092724925
- type: nauc_precision_at_1000_diff1
value: -19.120500628263994
- type: nauc_precision_at_1000_max
value: -6.650707109047473
- type: nauc_precision_at_1000_std
value: 15.71193179253002
- type: nauc_precision_at_100_diff1
value: 6.254606806876069
- type: nauc_precision_at_100_max
value: 14.601826922181823
- type: nauc_precision_at_100_std
value: 28.38299592246453
- type: nauc_precision_at_10_diff1
value: 22.978614338670816
- type: nauc_precision_at_10_max
value: 23.04146766323557
- type: nauc_precision_at_10_std
value: 6.226264308612577
- type: nauc_precision_at_1_diff1
value: 46.165736140679776
- type: nauc_precision_at_1_max
value: 20.071083446822147
- type: nauc_precision_at_1_std
value: -5.018909100858311
- type: nauc_precision_at_20_diff1
value: 17.681032853225602
- type: nauc_precision_at_20_max
value: 18.66680304585122
- type: nauc_precision_at_20_std
value: 15.34896796713905
- type: nauc_precision_at_3_diff1
value: 31.359396694559194
- type: nauc_precision_at_3_max
value: 22.279263308973274
- type: nauc_precision_at_3_std
value: 3.6302537979529035
- type: nauc_precision_at_5_diff1
value: 26.32257879892933
- type: nauc_precision_at_5_max
value: 25.402524493181026
- type: nauc_precision_at_5_std
value: 4.731450603747359
- type: nauc_recall_at_1000_diff1
value: 23.562925244967875
- type: nauc_recall_at_1000_max
value: 30.737399333586797
- type: nauc_recall_at_1000_std
value: 34.19418935008663
- type: nauc_recall_at_100_diff1
value: 28.703574970574824
- type: nauc_recall_at_100_max
value: 22.448663600170278
- type: nauc_recall_at_100_std
value: 24.53297349042035
- type: nauc_recall_at_10_diff1
value: 31.73603907811882
- type: nauc_recall_at_10_max
value: 23.453183748640765
- type: nauc_recall_at_10_std
value: -1.8279054407176274
- type: nauc_recall_at_1_diff1
value: 45.015617718902654
- type: nauc_recall_at_1_max
value: 19.313800263189638
- type: nauc_recall_at_1_std
value: -4.763931386681675
- type: nauc_recall_at_20_diff1
value: 28.74169081866096
- type: nauc_recall_at_20_max
value: 20.035509169577324
- type: nauc_recall_at_20_std
value: 7.371615811227748
- type: nauc_recall_at_3_diff1
value: 34.09890157333362
- type: nauc_recall_at_3_max
value: 20.46565842748346
- type: nauc_recall_at_3_std
value: -0.4337283067447526
- type: nauc_recall_at_5_diff1
value: 30.974580787842402
- type: nauc_recall_at_5_max
value: 23.76379349487105
- type: nauc_recall_at_5_std
value: -1.8407515927979428
- type: ndcg_at_1
value: 27.172
- type: ndcg_at_10
value: 38.269999999999996
- type: ndcg_at_100
value: 43.338
- type: ndcg_at_1000
value: 45.594
- type: ndcg_at_20
value: 40.256
- type: ndcg_at_3
value: 32.673
- type: ndcg_at_5
value: 35.224
- type: precision_at_1
value: 27.172
- type: precision_at_10
value: 6.063000000000001
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.123
- type: precision_at_20
value: 3.5029999999999997
- type: precision_at_3
value: 13.74
- type: precision_at_5
value: 9.797
- type: recall_at_1
value: 25.324999999999996
- type: recall_at_10
value: 51.634
- type: recall_at_100
value: 74.687
- type: recall_at_1000
value: 91.412
- type: recall_at_20
value: 59.207
- type: recall_at_3
value: 36.678
- type: recall_at_5
value: 42.742999999999995
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 36.853
- type: map_at_1
value: 15.371000000000002
- type: map_at_10
value: 27.122
- type: map_at_100
value: 29.226000000000003
- type: map_at_1000
value: 29.409999999999997
- type: map_at_20
value: 28.274
- type: map_at_3
value: 22.431
- type: map_at_5
value: 24.877
- type: mrr_at_1
value: 34.13680781758958
- type: mrr_at_10
value: 47.265911793599145
- type: mrr_at_100
value: 48.028369995763846
- type: mrr_at_1000
value: 48.05317022537804
- type: mrr_at_20
value: 47.75785292259516
- type: mrr_at_3
value: 43.887079261672156
- type: mrr_at_5
value: 45.906623235613544
- type: nauc_map_at_1000_diff1
value: 24.949211292921547
- type: nauc_map_at_1000_max
value: 38.69844483304584
- type: nauc_map_at_1000_std
value: 18.336359440844753
- type: nauc_map_at_100_diff1
value: 24.8951732982492
- type: nauc_map_at_100_max
value: 38.65049158594052
- type: nauc_map_at_100_std
value: 18.28935278388095
- type: nauc_map_at_10_diff1
value: 24.606032216798273
- type: nauc_map_at_10_max
value: 38.00608351559887
- type: nauc_map_at_10_std
value: 16.61261615173358
- type: nauc_map_at_1_diff1
value: 30.83614944448221
- type: nauc_map_at_1_max
value: 33.757528532809
- type: nauc_map_at_1_std
value: 8.880622713261126
- type: nauc_map_at_20_diff1
value: 24.75491310922017
- type: nauc_map_at_20_max
value: 38.353679076398834
- type: nauc_map_at_20_std
value: 17.58637493443171
- type: nauc_map_at_3_diff1
value: 25.563085273287083
- type: nauc_map_at_3_max
value: 35.14515679047155
- type: nauc_map_at_3_std
value: 11.75594869817732
- type: nauc_map_at_5_diff1
value: 24.815807517691614
- type: nauc_map_at_5_max
value: 36.25905426665983
- type: nauc_map_at_5_std
value: 14.516391726180697
- type: nauc_mrr_at_1000_diff1
value: 27.948233427121274
- type: nauc_mrr_at_1000_max
value: 37.5893640945859
- type: nauc_mrr_at_1000_std
value: 19.588442449629763
- type: nauc_mrr_at_100_diff1
value: 27.947962345854037
- type: nauc_mrr_at_100_max
value: 37.60375479481945
- type: nauc_mrr_at_100_std
value: 19.614791576283793
- type: nauc_mrr_at_10_diff1
value: 27.882311310262136
- type: nauc_mrr_at_10_max
value: 37.58580968074054
- type: nauc_mrr_at_10_std
value: 19.49875186170201
- type: nauc_mrr_at_1_diff1
value: 28.017413073648477
- type: nauc_mrr_at_1_max
value: 32.87710191514022
- type: nauc_mrr_at_1_std
value: 14.04889142608459
- type: nauc_mrr_at_20_diff1
value: 27.89129925771968
- type: nauc_mrr_at_20_max
value: 37.6142863106945
- type: nauc_mrr_at_20_std
value: 19.645390143394163
- type: nauc_mrr_at_3_diff1
value: 27.99609559690795
- type: nauc_mrr_at_3_max
value: 36.87362332456197
- type: nauc_mrr_at_3_std
value: 18.598416821915333
- type: nauc_mrr_at_5_diff1
value: 27.68306089976716
- type: nauc_mrr_at_5_max
value: 37.12264485659723
- type: nauc_mrr_at_5_std
value: 19.18875305730564
- type: nauc_ndcg_at_1000_diff1
value: 25.736779186453777
- type: nauc_ndcg_at_1000_max
value: 41.93281139456004
- type: nauc_ndcg_at_1000_std
value: 25.179038422659993
- type: nauc_ndcg_at_100_diff1
value: 25.144796623848322
- type: nauc_ndcg_at_100_max
value: 41.72820916876173
- type: nauc_ndcg_at_100_std
value: 25.12851686850754
- type: nauc_ndcg_at_10_diff1
value: 24.321249191226652
- type: nauc_ndcg_at_10_max
value: 40.23711916935706
- type: nauc_ndcg_at_10_std
value: 20.89060972334557
- type: nauc_ndcg_at_1_diff1
value: 28.017413073648477
- type: nauc_ndcg_at_1_max
value: 32.87710191514022
- type: nauc_ndcg_at_1_std
value: 14.04889142608459
- type: nauc_ndcg_at_20_diff1
value: 24.5090484877482
- type: nauc_ndcg_at_20_max
value: 40.752854032983606
- type: nauc_ndcg_at_20_std
value: 22.70331074781384
- type: nauc_ndcg_at_3_diff1
value: 25.13499057756147
- type: nauc_ndcg_at_3_max
value: 35.8325682137567
- type: nauc_ndcg_at_3_std
value: 15.23768392706637
- type: nauc_ndcg_at_5_diff1
value: 24.614105695451116
- type: nauc_ndcg_at_5_max
value: 37.68089587624492
- type: nauc_ndcg_at_5_std
value: 17.946406099261708
- type: nauc_precision_at_1000_diff1
value: -2.022340544774227
- type: nauc_precision_at_1000_max
value: 6.070578645067797
- type: nauc_precision_at_1000_std
value: 22.15132728777549
- type: nauc_precision_at_100_diff1
value: 4.544144474504255
- type: nauc_precision_at_100_max
value: 19.780392159848574
- type: nauc_precision_at_100_std
value: 31.107111186002438
- type: nauc_precision_at_10_diff1
value: 10.107015022955848
- type: nauc_precision_at_10_max
value: 30.779709099060465
- type: nauc_precision_at_10_std
value: 27.324148451668602
- type: nauc_precision_at_1_diff1
value: 28.017413073648477
- type: nauc_precision_at_1_max
value: 32.87710191514022
- type: nauc_precision_at_1_std
value: 14.04889142608459
- type: nauc_precision_at_20_diff1
value: 8.270881053079405
- type: nauc_precision_at_20_max
value: 27.26753946078481
- type: nauc_precision_at_20_std
value: 29.156725822074204
- type: nauc_precision_at_3_diff1
value: 17.82468940497632
- type: nauc_precision_at_3_max
value: 31.490021174215155
- type: nauc_precision_at_3_std
value: 18.73818985054394
- type: nauc_precision_at_5_diff1
value: 13.24803141673961
- type: nauc_precision_at_5_max
value: 29.94926240784298
- type: nauc_precision_at_5_std
value: 23.2940906142919
- type: nauc_recall_at_1000_diff1
value: 19.09850333580471
- type: nauc_recall_at_1000_max
value: 46.026306142840596
- type: nauc_recall_at_1000_std
value: 46.50391519568263
- type: nauc_recall_at_100_diff1
value: 16.739384224869738
- type: nauc_recall_at_100_max
value: 40.68987136431252
- type: nauc_recall_at_100_std
value: 36.01609750485591
- type: nauc_recall_at_10_diff1
value: 17.51796617221814
- type: nauc_recall_at_10_max
value: 39.47453129444401
- type: nauc_recall_at_10_std
value: 23.79239002974899
- type: nauc_recall_at_1_diff1
value: 30.83614944448221
- type: nauc_recall_at_1_max
value: 33.757528532809
- type: nauc_recall_at_1_std
value: 8.880622713261126
- type: nauc_recall_at_20_diff1
value: 16.978668307251652
- type: nauc_recall_at_20_max
value: 39.09115357303713
- type: nauc_recall_at_20_std
value: 27.278668534187524
- type: nauc_recall_at_3_diff1
value: 22.55937738994021
- type: nauc_recall_at_3_max
value: 36.25055459395638
- type: nauc_recall_at_3_std
value: 14.828905168761247
- type: nauc_recall_at_5_diff1
value: 19.32656748627199
- type: nauc_recall_at_5_max
value: 36.28836228620816
- type: nauc_recall_at_5_std
value: 19.264352933914278
- type: ndcg_at_1
value: 34.137
- type: ndcg_at_10
value: 36.853
- type: ndcg_at_100
value: 44.279
- type: ndcg_at_1000
value: 47.336
- type: ndcg_at_20
value: 39.815
- type: ndcg_at_3
value: 30.253999999999998
- type: ndcg_at_5
value: 32.649
- type: precision_at_1
value: 34.137
- type: precision_at_10
value: 11.655
- type: precision_at_100
value: 1.9619999999999997
- type: precision_at_1000
value: 0.254
- type: precision_at_20
value: 7.1209999999999996
- type: precision_at_3
value: 22.823
- type: precision_at_5
value: 17.655
- type: recall_at_1
value: 15.371000000000002
- type: recall_at_10
value: 43.718
- type: recall_at_100
value: 68.81
- type: recall_at_1000
value: 85.69600000000001
- type: recall_at_20
value: 51.94
- type: recall_at_3
value: 27.694000000000003
- type: recall_at_5
value: 34.469
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 45.553
- type: map_at_1
value: 9.168999999999999
- type: map_at_10
value: 22.154
- type: map_at_100
value: 32.174
- type: map_at_1000
value: 33.974
- type: map_at_20
value: 25.899
- type: map_at_3
value: 15.275
- type: map_at_5
value: 18.291
- type: mrr_at_1
value: 70.75
- type: mrr_at_10
value: 78.39662698412697
- type: mrr_at_100
value: 78.56221458977012
- type: mrr_at_1000
value: 78.56669970642338
- type: mrr_at_20
value: 78.49688805346696
- type: mrr_at_3
value: 76.33333333333333
- type: mrr_at_5
value: 77.70833333333333
- type: nauc_map_at_1000_diff1
value: 18.465085922071346
- type: nauc_map_at_1000_max
value: 24.29804638788498
- type: nauc_map_at_1000_std
value: 22.380463943423514
- type: nauc_map_at_100_diff1
value: 19.37585410674523
- type: nauc_map_at_100_max
value: 22.56424042509462
- type: nauc_map_at_100_std
value: 19.672237275984426
- type: nauc_map_at_10_diff1
value: 23.597788166305577
- type: nauc_map_at_10_max
value: 9.157316105122925
- type: nauc_map_at_10_std
value: -3.8881247055786807
- type: nauc_map_at_1_diff1
value: 43.96699602275052
- type: nauc_map_at_1_max
value: -0.7577088440873263
- type: nauc_map_at_1_std
value: -17.732463891968404
- type: nauc_map_at_20_diff1
value: 22.326759054850097
- type: nauc_map_at_20_max
value: 14.879191412167703
- type: nauc_map_at_20_std
value: 5.405751236575241
- type: nauc_map_at_3_diff1
value: 28.73583545428074
- type: nauc_map_at_3_max
value: 1.5986597211018239
- type: nauc_map_at_3_std
value: -16.512455883681515
- type: nauc_map_at_5_diff1
value: 25.401810959155057
- type: nauc_map_at_5_max
value: 4.418875376978587
- type: nauc_map_at_5_std
value: -12.296750992013052
- type: nauc_mrr_at_1000_diff1
value: 51.228801807498584
- type: nauc_mrr_at_1000_max
value: 61.040998883279585
- type: nauc_mrr_at_1000_std
value: 40.93983887257123
- type: nauc_mrr_at_100_diff1
value: 51.23715338435314
- type: nauc_mrr_at_100_max
value: 61.03971408781317
- type: nauc_mrr_at_100_std
value: 40.91796923590573
- type: nauc_mrr_at_10_diff1
value: 51.1214868552331
- type: nauc_mrr_at_10_max
value: 61.03069045590881
- type: nauc_mrr_at_10_std
value: 40.661621199704264
- type: nauc_mrr_at_1_diff1
value: 50.84660003035892
- type: nauc_mrr_at_1_max
value: 60.692091499960895
- type: nauc_mrr_at_1_std
value: 42.126228731502955
- type: nauc_mrr_at_20_diff1
value: 51.0402624284872
- type: nauc_mrr_at_20_max
value: 60.94577844338166
- type: nauc_mrr_at_20_std
value: 40.89505950503613
- type: nauc_mrr_at_3_diff1
value: 51.771113665996516
- type: nauc_mrr_at_3_max
value: 61.65264793077224
- type: nauc_mrr_at_3_std
value: 41.75781827057092
- type: nauc_mrr_at_5_diff1
value: 51.0656793772882
- type: nauc_mrr_at_5_max
value: 61.08042065139715
- type: nauc_mrr_at_5_std
value: 41.11203271084835
- type: nauc_ndcg_at_1000_diff1
value: 22.347978262245107
- type: nauc_ndcg_at_1000_max
value: 36.56458763955002
- type: nauc_ndcg_at_1000_std
value: 35.99616144258822
- type: nauc_ndcg_at_100_diff1
value: 23.1120990977162
- type: nauc_ndcg_at_100_max
value: 30.79663306311657
- type: nauc_ndcg_at_100_std
value: 27.387572106784297
- type: nauc_ndcg_at_10_diff1
value: 23.329746066899656
- type: nauc_ndcg_at_10_max
value: 28.69246947084685
- type: nauc_ndcg_at_10_std
value: 21.457736188325345
- type: nauc_ndcg_at_1_diff1
value: 39.99399153456974
- type: nauc_ndcg_at_1_max
value: 38.12447856470389
- type: nauc_ndcg_at_1_std
value: 27.768869260384676
- type: nauc_ndcg_at_20_diff1
value: 24.945374175339907
- type: nauc_ndcg_at_20_max
value: 27.67836982165295
- type: nauc_ndcg_at_20_std
value: 19.7933631060578
- type: nauc_ndcg_at_3_diff1
value: 26.063492354398527
- type: nauc_ndcg_at_3_max
value: 33.06541959550656
- type: nauc_ndcg_at_3_std
value: 23.278902797288726
- type: nauc_ndcg_at_5_diff1
value: 22.521596060750035
- type: nauc_ndcg_at_5_max
value: 31.210005673730784
- type: nauc_ndcg_at_5_std
value: 22.893106456317927
- type: nauc_precision_at_1000_diff1
value: -19.845356495096006
- type: nauc_precision_at_1000_max
value: 4.163819381816099
- type: nauc_precision_at_1000_std
value: 7.612952884590339
- type: nauc_precision_at_100_diff1
value: -8.2679285153361
- type: nauc_precision_at_100_max
value: 29.78018175573565
- type: nauc_precision_at_100_std
value: 41.07244463956215
- type: nauc_precision_at_10_diff1
value: -3.2451428407349057
- type: nauc_precision_at_10_max
value: 36.92563008274906
- type: nauc_precision_at_10_std
value: 45.06962043489777
- type: nauc_precision_at_1_diff1
value: 50.84660003035892
- type: nauc_precision_at_1_max
value: 60.692091499960895
- type: nauc_precision_at_1_std
value: 42.126228731502955
- type: nauc_precision_at_20_diff1
value: -3.432279149061878
- type: nauc_precision_at_20_max
value: 37.013592483974875
- type: nauc_precision_at_20_std
value: 46.47324739428665
- type: nauc_precision_at_3_diff1
value: 7.28495481051025
- type: nauc_precision_at_3_max
value: 38.66372411741402
- type: nauc_precision_at_3_std
value: 35.23163993723955
- type: nauc_precision_at_5_diff1
value: -0.16540230063716202
- type: nauc_precision_at_5_max
value: 37.322494255721715
- type: nauc_precision_at_5_std
value: 39.666653561269754
- type: nauc_recall_at_1000_diff1
value: 11.388326469283681
- type: nauc_recall_at_1000_max
value: 32.698146308591674
- type: nauc_recall_at_1000_std
value: 49.48830488070777
- type: nauc_recall_at_100_diff1
value: 11.497443532756819
- type: nauc_recall_at_100_max
value: 20.196970431621615
- type: nauc_recall_at_100_std
value: 23.688772100803433
- type: nauc_recall_at_10_diff1
value: 16.519851398596003
- type: nauc_recall_at_10_max
value: 0.774066845071221
- type: nauc_recall_at_10_std
value: -10.89514647001814
- type: nauc_recall_at_1_diff1
value: 43.96699602275052
- type: nauc_recall_at_1_max
value: -0.7577088440873263
- type: nauc_recall_at_1_std
value: -17.732463891968404
- type: nauc_recall_at_20_diff1
value: 15.202960269878258
- type: nauc_recall_at_20_max
value: 7.067263295590253
- type: nauc_recall_at_20_std
value: -0.06050108222640702
- type: nauc_recall_at_3_diff1
value: 24.066741361525125
- type: nauc_recall_at_3_max
value: -2.1961525860488424
- type: nauc_recall_at_3_std
value: -19.48307077749568
- type: nauc_recall_at_5_diff1
value: 20.086330794102707
- type: nauc_recall_at_5_max
value: -0.8866528062747986
- type: nauc_recall_at_5_std
value: -16.53799173962747
- type: ndcg_at_1
value: 57.99999999999999
- type: ndcg_at_10
value: 45.553
- type: ndcg_at_100
value: 51.014
- type: ndcg_at_1000
value: 58.226
- type: ndcg_at_20
value: 44.98
- type: ndcg_at_3
value: 48.981
- type: ndcg_at_5
value: 46.794999999999995
- type: precision_at_1
value: 70.75
- type: precision_at_10
value: 36.85
- type: precision_at_100
value: 11.955
- type: precision_at_1000
value: 2.247
- type: precision_at_20
value: 28.075
- type: precision_at_3
value: 52.666999999999994
- type: precision_at_5
value: 45.85
- type: recall_at_1
value: 9.168999999999999
- type: recall_at_10
value: 28.796
- type: recall_at_100
value: 58.892999999999994
- type: recall_at_1000
value: 81.644
- type: recall_at_20
value: 36.659000000000006
- type: recall_at_3
value: 16.709
- type: recall_at_5
value: 21.387
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 88.41
- type: map_at_1
value: 75.637
- type: map_at_10
value: 84.674
- type: map_at_100
value: 84.909
- type: map_at_1000
value: 84.92
- type: map_at_20
value: 84.836
- type: map_at_3
value: 83.44200000000001
- type: map_at_5
value: 84.28099999999999
- type: mrr_at_1
value: 81.56315631563157
- type: mrr_at_10
value: 88.89571695264748
- type: mrr_at_100
value: 88.93671417216285
- type: mrr_at_1000
value: 88.93708016011664
- type: mrr_at_20
value: 88.9311652665256
- type: mrr_at_3
value: 88.20882088208805
- type: mrr_at_5
value: 88.72937293729349
- type: nauc_map_at_1000_diff1
value: 54.41216035074026
- type: nauc_map_at_1000_max
value: 13.346153003554361
- type: nauc_map_at_1000_std
value: -6.721664416152164
- type: nauc_map_at_100_diff1
value: 54.36538350995795
- type: nauc_map_at_100_max
value: 13.355583381471298
- type: nauc_map_at_100_std
value: -6.696921015641016
- type: nauc_map_at_10_diff1
value: 54.0389127730555
- type: nauc_map_at_10_max
value: 13.387802159150663
- type: nauc_map_at_10_std
value: -6.73514381731833
- type: nauc_map_at_1_diff1
value: 57.99489574836453
- type: nauc_map_at_1_max
value: 7.830032589171654
- type: nauc_map_at_1_std
value: -10.140208285080295
- type: nauc_map_at_20_diff1
value: 54.16841004736076
- type: nauc_map_at_20_max
value: 13.345607363689746
- type: nauc_map_at_20_std
value: -6.663119775158465
- type: nauc_map_at_3_diff1
value: 53.82879543599303
- type: nauc_map_at_3_max
value: 12.716952288433902
- type: nauc_map_at_3_std
value: -7.746102082835598
- type: nauc_map_at_5_diff1
value: 53.82838395350109
- type: nauc_map_at_5_max
value: 13.487373534211702
- type: nauc_map_at_5_std
value: -6.869504398693434
- type: nauc_mrr_at_1000_diff1
value: 68.92783546581906
- type: nauc_mrr_at_1000_max
value: 12.076297180596592
- type: nauc_mrr_at_1000_std
value: -13.306257067567998
- type: nauc_mrr_at_100_diff1
value: 68.92780219775517
- type: nauc_mrr_at_100_max
value: 12.078449805054374
- type: nauc_mrr_at_100_std
value: -13.303524852703719
- type: nauc_mrr_at_10_diff1
value: 68.92686206881258
- type: nauc_mrr_at_10_max
value: 12.273295656884873
- type: nauc_mrr_at_10_std
value: -13.222483496603965
- type: nauc_mrr_at_1_diff1
value: 70.1738022073041
- type: nauc_mrr_at_1_max
value: 9.378639533482806
- type: nauc_mrr_at_1_std
value: -13.444033823202348
- type: nauc_mrr_at_20_diff1
value: 68.91161304905303
- type: nauc_mrr_at_20_max
value: 12.117091514817885
- type: nauc_mrr_at_20_std
value: -13.258261750160239
- type: nauc_mrr_at_3_diff1
value: 68.61982455945467
- type: nauc_mrr_at_3_max
value: 12.608213879734578
- type: nauc_mrr_at_3_std
value: -13.558003431587839
- type: nauc_mrr_at_5_diff1
value: 68.81439097457242
- type: nauc_mrr_at_5_max
value: 12.54025598903624
- type: nauc_mrr_at_5_std
value: -13.199231514972093
- type: nauc_ndcg_at_1000_diff1
value: 56.47563443877495
- type: nauc_ndcg_at_1000_max
value: 14.508331783439466
- type: nauc_ndcg_at_1000_std
value: -6.206829736668775
- type: nauc_ndcg_at_100_diff1
value: 55.54015515673474
- type: nauc_ndcg_at_100_max
value: 14.753595778278136
- type: nauc_ndcg_at_100_std
value: -5.638517949568802
- type: nauc_ndcg_at_10_diff1
value: 54.220845223257996
- type: nauc_ndcg_at_10_max
value: 15.265309648490021
- type: nauc_ndcg_at_10_std
value: -5.516276098929109
- type: nauc_ndcg_at_1_diff1
value: 70.1738022073041
- type: nauc_ndcg_at_1_max
value: 9.378639533482806
- type: nauc_ndcg_at_1_std
value: -13.444033823202348
- type: nauc_ndcg_at_20_diff1
value: 54.481406100854635
- type: nauc_ndcg_at_20_max
value: 14.868763583210498
- type: nauc_ndcg_at_20_std
value: -5.328097380018734
- type: nauc_ndcg_at_3_diff1
value: 54.94411725607744
- type: nauc_ndcg_at_3_max
value: 14.27186734506607
- type: nauc_ndcg_at_3_std
value: -7.894724962312474
- type: nauc_ndcg_at_5_diff1
value: 54.08048166974806
- type: nauc_ndcg_at_5_max
value: 15.528233170721006
- type: nauc_ndcg_at_5_std
value: -5.984768714537104
- type: nauc_precision_at_1000_diff1
value: -8.744323640074445
- type: nauc_precision_at_1000_max
value: -0.01881224392053465
- type: nauc_precision_at_1000_std
value: 3.8721477979260635
- type: nauc_precision_at_100_diff1
value: -11.86150156952171
- type: nauc_precision_at_100_max
value: 3.2736651314552314
- type: nauc_precision_at_100_std
value: 8.12687620615509
- type: nauc_precision_at_10_diff1
value: -10.360708676781178
- type: nauc_precision_at_10_max
value: 10.945552490433458
- type: nauc_precision_at_10_std
value: 11.016707653014485
- type: nauc_precision_at_1_diff1
value: 70.1738022073041
- type: nauc_precision_at_1_max
value: 9.378639533482806
- type: nauc_precision_at_1_std
value: -13.444033823202348
- type: nauc_precision_at_20_diff1
value: -13.557721925696583
- type: nauc_precision_at_20_max
value: 6.331386521718574
- type: nauc_precision_at_20_std
value: 10.322188778142388
- type: nauc_precision_at_3_diff1
value: 15.139456770248968
- type: nauc_precision_at_3_max
value: 17.10220985600708
- type: nauc_precision_at_3_std
value: 3.0448183682558074
- type: nauc_precision_at_5_diff1
value: -1.9825577548111102
- type: nauc_precision_at_5_max
value: 17.139148127012625
- type: nauc_precision_at_5_std
value: 10.598435750554753
- type: nauc_recall_at_1000_diff1
value: 15.641740744283005
- type: nauc_recall_at_1000_max
value: 44.65315702195612
- type: nauc_recall_at_1000_std
value: 52.34265862835513
- type: nauc_recall_at_100_diff1
value: 5.254385435323394
- type: nauc_recall_at_100_max
value: 38.53577774395794
- type: nauc_recall_at_100_std
value: 43.47744274335829
- type: nauc_recall_at_10_diff1
value: 19.135735476268042
- type: nauc_recall_at_10_max
value: 30.05417445923848
- type: nauc_recall_at_10_std
value: 18.3988023241141
- type: nauc_recall_at_1_diff1
value: 57.99489574836453
- type: nauc_recall_at_1_max
value: 7.830032589171654
- type: nauc_recall_at_1_std
value: -10.140208285080295
- type: nauc_recall_at_20_diff1
value: 9.444797759735126
- type: nauc_recall_at_20_max
value: 31.001311675371017
- type: nauc_recall_at_20_std
value: 29.351418893822178
- type: nauc_recall_at_3_diff1
value: 36.88862653262064
- type: nauc_recall_at_3_max
value: 19.845892741607823
- type: nauc_recall_at_3_std
value: -1.0584273105890794
- type: nauc_recall_at_5_diff1
value: 27.360718561944974
- type: nauc_recall_at_5_max
value: 26.698311215441738
- type: nauc_recall_at_5_std
value: 8.97113997755362
- type: ndcg_at_1
value: 81.563
- type: ndcg_at_10
value: 88.41
- type: ndcg_at_100
value: 89.101
- type: ndcg_at_1000
value: 89.25800000000001
- type: ndcg_at_20
value: 88.79
- type: ndcg_at_3
value: 86.599
- type: ndcg_at_5
value: 87.74
- type: precision_at_1
value: 81.563
- type: precision_at_10
value: 10.699
- type: precision_at_100
value: 1.13
- type: precision_at_1000
value: 0.116
- type: precision_at_20
value: 5.479
- type: precision_at_3
value: 33.238
- type: precision_at_5
value: 20.744
- type: recall_at_1
value: 75.637
- type: recall_at_10
value: 95.57600000000001
- type: recall_at_100
value: 98.072
- type: recall_at_1000
value: 98.951
- type: recall_at_20
value: 96.792
- type: recall_at_3
value: 90.79599999999999
- type: recall_at_5
value: 93.674
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 42.396
- type: map_at_1
value: 21.711
- type: map_at_10
value: 34.628
- type: map_at_100
value: 36.549
- type: map_at_1000
value: 36.719
- type: map_at_20
value: 35.673
- type: map_at_3
value: 30.585
- type: map_at_5
value: 32.875
- type: mrr_at_1
value: 41.82098765432099
- type: mrr_at_10
value: 50.69505682931607
- type: mrr_at_100
value: 51.50556608727901
- type: mrr_at_1000
value: 51.53870583208304
- type: mrr_at_20
value: 51.15345764364655
- type: mrr_at_3
value: 48.35390946502059
- type: mrr_at_5
value: 49.87397119341563
- type: nauc_map_at_1000_diff1
value: 45.182252919583895
- type: nauc_map_at_1000_max
value: 35.66124930024801
- type: nauc_map_at_1000_std
value: -0.6925562638650965
- type: nauc_map_at_100_diff1
value: 45.116964706960125
- type: nauc_map_at_100_max
value: 35.54990469525889
- type: nauc_map_at_100_std
value: -0.6667263852859368
- type: nauc_map_at_10_diff1
value: 45.39189096228184
- type: nauc_map_at_10_max
value: 34.780111261901
- type: nauc_map_at_10_std
value: -1.8169859294150819
- type: nauc_map_at_1_diff1
value: 47.72764937952259
- type: nauc_map_at_1_max
value: 24.83306559709341
- type: nauc_map_at_1_std
value: -4.714128457297418
- type: nauc_map_at_20_diff1
value: 45.17073365898278
- type: nauc_map_at_20_max
value: 35.0938403469058
- type: nauc_map_at_20_std
value: -1.373412631183604
- type: nauc_map_at_3_diff1
value: 46.525724305731295
- type: nauc_map_at_3_max
value: 31.042538866512597
- type: nauc_map_at_3_std
value: -4.119355935975354
- type: nauc_map_at_5_diff1
value: 45.79569633383187
- type: nauc_map_at_5_max
value: 32.88779656647293
- type: nauc_map_at_5_std
value: -3.2518474739335312
- type: nauc_mrr_at_1000_diff1
value: 52.83619185487903
- type: nauc_mrr_at_1000_max
value: 42.30310720405186
- type: nauc_mrr_at_1000_std
value: -1.1487703348518024
- type: nauc_mrr_at_100_diff1
value: 52.82248853996664
- type: nauc_mrr_at_100_max
value: 42.30549701564678
- type: nauc_mrr_at_100_std
value: -1.1240113031894834
- type: nauc_mrr_at_10_diff1
value: 52.74644276642243
- type: nauc_mrr_at_10_max
value: 42.39103029476398
- type: nauc_mrr_at_10_std
value: -1.1043413237848576
- type: nauc_mrr_at_1_diff1
value: 54.810335521617326
- type: nauc_mrr_at_1_max
value: 40.733260207843394
- type: nauc_mrr_at_1_std
value: -4.452554921565855
- type: nauc_mrr_at_20_diff1
value: 52.788257862499954
- type: nauc_mrr_at_20_max
value: 42.32658875363406
- type: nauc_mrr_at_20_std
value: -1.2209728080684497
- type: nauc_mrr_at_3_diff1
value: 53.43281175319808
- type: nauc_mrr_at_3_max
value: 41.735942650867926
- type: nauc_mrr_at_3_std
value: -2.462688102468019
- type: nauc_mrr_at_5_diff1
value: 52.874037126566606
- type: nauc_mrr_at_5_max
value: 41.93740449458822
- type: nauc_mrr_at_5_std
value: -1.2928874908441947
- type: nauc_ndcg_at_1000_diff1
value: 46.5532425476402
- type: nauc_ndcg_at_1000_max
value: 40.369611603370515
- type: nauc_ndcg_at_1000_std
value: 3.472567588386994
- type: nauc_ndcg_at_100_diff1
value: 45.75244404695404
- type: nauc_ndcg_at_100_max
value: 39.36470550675439
- type: nauc_ndcg_at_100_std
value: 4.356189041115731
- type: nauc_ndcg_at_10_diff1
value: 46.005135323539704
- type: nauc_ndcg_at_10_max
value: 37.89018165334218
- type: nauc_ndcg_at_10_std
value: 0.7129618297768014
- type: nauc_ndcg_at_1_diff1
value: 54.810335521617326
- type: nauc_ndcg_at_1_max
value: 40.733260207843394
- type: nauc_ndcg_at_1_std
value: -4.452554921565855
- type: nauc_ndcg_at_20_diff1
value: 45.841552790490034
- type: nauc_ndcg_at_20_max
value: 38.04992825472661
- type: nauc_ndcg_at_20_std
value: 1.2748305707955212
- type: nauc_ndcg_at_3_diff1
value: 46.683033449357744
- type: nauc_ndcg_at_3_max
value: 37.46397870760607
- type: nauc_ndcg_at_3_std
value: -2.3421854966319824
- type: nauc_ndcg_at_5_diff1
value: 45.82409645378457
- type: nauc_ndcg_at_5_max
value: 36.27588234096716
- type: nauc_ndcg_at_5_std
value: -1.5141197170944254
- type: nauc_precision_at_1000_diff1
value: -3.137944321071885
- type: nauc_precision_at_1000_max
value: 24.12803166253776
- type: nauc_precision_at_1000_std
value: 11.076454789944101
- type: nauc_precision_at_100_diff1
value: 3.9896283891401048
- type: nauc_precision_at_100_max
value: 31.00198316788829
- type: nauc_precision_at_100_std
value: 15.725887643803063
- type: nauc_precision_at_10_diff1
value: 20.493420889888394
- type: nauc_precision_at_10_max
value: 41.689699671507405
- type: nauc_precision_at_10_std
value: 9.374983385669914
- type: nauc_precision_at_1_diff1
value: 54.810335521617326
- type: nauc_precision_at_1_max
value: 40.733260207843394
- type: nauc_precision_at_1_std
value: -4.452554921565855
- type: nauc_precision_at_20_diff1
value: 15.02911800246446
- type: nauc_precision_at_20_max
value: 39.227068888505
- type: nauc_precision_at_20_std
value: 11.755558515319404
- type: nauc_precision_at_3_diff1
value: 34.044986535461746
- type: nauc_precision_at_3_max
value: 40.96605829831656
- type: nauc_precision_at_3_std
value: 1.1903535705688038
- type: nauc_precision_at_5_diff1
value: 26.617002443432707
- type: nauc_precision_at_5_max
value: 40.60413785916794
- type: nauc_precision_at_5_std
value: 3.6984531670502814
- type: nauc_recall_at_1000_diff1
value: 26.96489389440101
- type: nauc_recall_at_1000_max
value: 41.811583968523955
- type: nauc_recall_at_1000_std
value: 41.5719519496712
- type: nauc_recall_at_100_diff1
value: 28.50851434908223
- type: nauc_recall_at_100_max
value: 32.19528060706322
- type: nauc_recall_at_100_std
value: 25.56935294258179
- type: nauc_recall_at_10_diff1
value: 35.139582891180964
- type: nauc_recall_at_10_max
value: 32.15221840434225
- type: nauc_recall_at_10_std
value: 5.550434611582702
- type: nauc_recall_at_1_diff1
value: 47.72764937952259
- type: nauc_recall_at_1_max
value: 24.83306559709341
- type: nauc_recall_at_1_std
value: -4.714128457297418
- type: nauc_recall_at_20_diff1
value: 32.78604811055205
- type: nauc_recall_at_20_max
value: 29.62940720700254
- type: nauc_recall_at_20_std
value: 6.769941491859872
- type: nauc_recall_at_3_diff1
value: 40.76090616138699
- type: nauc_recall_at_3_max
value: 27.506425490226867
- type: nauc_recall_at_3_std
value: -2.608872693119243
- type: nauc_recall_at_5_diff1
value: 37.06532485024711
- type: nauc_recall_at_5_max
value: 27.704150556658448
- type: nauc_recall_at_5_std
value: 0.4718707152343872
- type: ndcg_at_1
value: 41.821000000000005
- type: ndcg_at_10
value: 42.396
- type: ndcg_at_100
value: 49.370000000000005
- type: ndcg_at_1000
value: 52.251000000000005
- type: ndcg_at_20
value: 45.097
- type: ndcg_at_3
value: 39.028
- type: ndcg_at_5
value: 40.222
- type: precision_at_1
value: 41.821000000000005
- type: precision_at_10
value: 11.451
- type: precision_at_100
value: 1.863
- type: precision_at_1000
value: 0.23900000000000002
- type: precision_at_20
value: 6.798
- type: precision_at_3
value: 25.823
- type: precision_at_5
value: 18.735
- type: recall_at_1
value: 21.711
- type: recall_at_10
value: 48.862
- type: recall_at_100
value: 74.708
- type: recall_at_1000
value: 91.865
- type: recall_at_20
value: 57.50999999999999
- type: recall_at_3
value: 35.85
- type: recall_at_5
value: 41.976
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 72.21
- type: map_at_1
value: 39.487
- type: map_at_10
value: 63.949999999999996
- type: map_at_100
value: 64.873
- type: map_at_1000
value: 64.927
- type: map_at_20
value: 64.529
- type: map_at_3
value: 60.243
- type: map_at_5
value: 62.613
- type: mrr_at_1
value: 78.97366644159351
- type: mrr_at_10
value: 84.84600173627825
- type: mrr_at_100
value: 85.0172804866798
- type: mrr_at_1000
value: 85.02245651152857
- type: mrr_at_20
value: 84.9625577788225
- type: mrr_at_3
value: 83.90276839972962
- type: mrr_at_5
value: 84.48278190411845
- type: nauc_map_at_1000_diff1
value: 19.825004700775164
- type: nauc_map_at_1000_max
value: 19.943221724164182
- type: nauc_map_at_1000_std
value: 10.068951166560058
- type: nauc_map_at_100_diff1
value: 19.80139472181137
- type: nauc_map_at_100_max
value: 19.938006132804347
- type: nauc_map_at_100_std
value: 10.100008107666842
- type: nauc_map_at_10_diff1
value: 19.53604502514735
- type: nauc_map_at_10_max
value: 19.62768870331064
- type: nauc_map_at_10_std
value: 9.446859074725705
- type: nauc_map_at_1_diff1
value: 67.7764270505257
- type: nauc_map_at_1_max
value: 38.45166604737058
- type: nauc_map_at_1_std
value: 1.9919181988552352
- type: nauc_map_at_20_diff1
value: 19.635871913149913
- type: nauc_map_at_20_max
value: 19.812838965919155
- type: nauc_map_at_20_std
value: 9.905163140101845
- type: nauc_map_at_3_diff1
value: 18.965707122532212
- type: nauc_map_at_3_max
value: 17.878860313056517
- type: nauc_map_at_3_std
value: 6.189378752019195
- type: nauc_map_at_5_diff1
value: 19.493354049675954
- type: nauc_map_at_5_max
value: 19.24527088109141
- type: nauc_map_at_5_std
value: 8.283883139680066
- type: nauc_mrr_at_1000_diff1
value: 66.87150374356781
- type: nauc_mrr_at_1000_max
value: 41.413456443203984
- type: nauc_mrr_at_1000_std
value: 4.140387282484357
- type: nauc_mrr_at_100_diff1
value: 66.87178015619061
- type: nauc_mrr_at_100_max
value: 41.419754763150834
- type: nauc_mrr_at_100_std
value: 4.15222235416704
- type: nauc_mrr_at_10_diff1
value: 66.89720586892301
- type: nauc_mrr_at_10_max
value: 41.56353878125211
- type: nauc_mrr_at_10_std
value: 4.213376519922392
- type: nauc_mrr_at_1_diff1
value: 67.7764270505257
- type: nauc_mrr_at_1_max
value: 38.45166604737058
- type: nauc_mrr_at_1_std
value: 1.9919181988552352
- type: nauc_mrr_at_20_diff1
value: 66.8714688713149
- type: nauc_mrr_at_20_max
value: 41.46170778986735
- type: nauc_mrr_at_20_std
value: 4.165154741309859
- type: nauc_mrr_at_3_diff1
value: 66.31615462679144
- type: nauc_mrr_at_3_max
value: 41.419637693259936
- type: nauc_mrr_at_3_std
value: 3.814834551396097
- type: nauc_mrr_at_5_diff1
value: 66.7289413087213
- type: nauc_mrr_at_5_max
value: 41.668346356371586
- type: nauc_mrr_at_5_std
value: 4.116331539882484
- type: nauc_ndcg_at_1000_diff1
value: 26.37325375970598
- type: nauc_ndcg_at_1000_max
value: 24.850915174721735
- type: nauc_ndcg_at_1000_std
value: 13.37585683440429
- type: nauc_ndcg_at_100_diff1
value: 25.591771178059503
- type: nauc_ndcg_at_100_max
value: 24.562820829532473
- type: nauc_ndcg_at_100_std
value: 14.093690500501541
- type: nauc_ndcg_at_10_diff1
value: 24.64600598115805
- type: nauc_ndcg_at_10_max
value: 23.543499404760023
- type: nauc_ndcg_at_10_std
value: 11.55823632781553
- type: nauc_ndcg_at_1_diff1
value: 67.7764270505257
- type: nauc_ndcg_at_1_max
value: 38.45166604737058
- type: nauc_ndcg_at_1_std
value: 1.9919181988552352
- type: nauc_ndcg_at_20_diff1
value: 24.757843275306726
- type: nauc_ndcg_at_20_max
value: 23.951154200380827
- type: nauc_ndcg_at_20_std
value: 12.931320453044886
- type: nauc_ndcg_at_3_diff1
value: 24.37742630418847
- type: nauc_ndcg_at_3_max
value: 21.310512304883723
- type: nauc_ndcg_at_3_std
value: 6.503993200818077
- type: nauc_ndcg_at_5_diff1
value: 24.813706829269716
- type: nauc_ndcg_at_5_max
value: 22.993657212898
- type: nauc_ndcg_at_5_std
value: 9.34462052506809
- type: nauc_precision_at_1000_diff1
value: -0.6506415756958156
- type: nauc_precision_at_1000_max
value: 28.039755644694875
- type: nauc_precision_at_1000_std
value: 53.46474329623814
- type: nauc_precision_at_100_diff1
value: 3.78462668236152
- type: nauc_precision_at_100_max
value: 22.501700881673862
- type: nauc_precision_at_100_std
value: 40.56672716474142
- type: nauc_precision_at_10_diff1
value: 9.156113228907534
- type: nauc_precision_at_10_max
value: 19.734206254833254
- type: nauc_precision_at_10_std
value: 19.986282545779602
- type: nauc_precision_at_1_diff1
value: 67.7764270505257
- type: nauc_precision_at_1_max
value: 38.45166604737058
- type: nauc_precision_at_1_std
value: 1.9919181988552352
- type: nauc_precision_at_20_diff1
value: 6.6164335644470125
- type: nauc_precision_at_20_max
value: 20.29343459608317
- type: nauc_precision_at_20_std
value: 26.51115475333977
- type: nauc_precision_at_3_diff1
value: 12.476520554399546
- type: nauc_precision_at_3_max
value: 16.69401409858964
- type: nauc_precision_at_3_std
value: 8.165880294907444
- type: nauc_precision_at_5_diff1
value: 11.783242828320958
- type: nauc_precision_at_5_max
value: 19.0679467875759
- type: nauc_precision_at_5_std
value: 13.615358345509884
- type: nauc_recall_at_1000_diff1
value: -0.6506415756960168
- type: nauc_recall_at_1000_max
value: 28.039755644694786
- type: nauc_recall_at_1000_std
value: 53.46474329623801
- type: nauc_recall_at_100_diff1
value: 3.7846266823613877
- type: nauc_recall_at_100_max
value: 22.501700881674008
- type: nauc_recall_at_100_std
value: 40.566727164741366
- type: nauc_recall_at_10_diff1
value: 9.15611322890755
- type: nauc_recall_at_10_max
value: 19.73420625483318
- type: nauc_recall_at_10_std
value: 19.98628254577951
- type: nauc_recall_at_1_diff1
value: 67.7764270505257
- type: nauc_recall_at_1_max
value: 38.45166604737058
- type: nauc_recall_at_1_std
value: 1.9919181988552352
- type: nauc_recall_at_20_diff1
value: 6.616433564446929
- type: nauc_recall_at_20_max
value: 20.293434596083248
- type: nauc_recall_at_20_std
value: 26.5111547533396
- type: nauc_recall_at_3_diff1
value: 12.476520554399531
- type: nauc_recall_at_3_max
value: 16.69401409858966
- type: nauc_recall_at_3_std
value: 8.165880294907438
- type: nauc_recall_at_5_diff1
value: 11.783242828320999
- type: nauc_recall_at_5_max
value: 19.067946787575845
- type: nauc_recall_at_5_std
value: 13.61535834550991
- type: ndcg_at_1
value: 78.974
- type: ndcg_at_10
value: 72.21
- type: ndcg_at_100
value: 75.264
- type: ndcg_at_1000
value: 76.259
- type: ndcg_at_20
value: 73.628
- type: ndcg_at_3
value: 67.047
- type: ndcg_at_5
value: 69.974
- type: precision_at_1
value: 78.974
- type: precision_at_10
value: 15.267
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.189
- type: precision_at_20
value: 8.09
- type: precision_at_3
value: 43.309
- type: precision_at_5
value: 28.294000000000004
- type: recall_at_1
value: 39.487
- type: recall_at_10
value: 76.334
- type: recall_at_100
value: 88.076
- type: recall_at_1000
value: 94.59100000000001
- type: recall_at_20
value: 80.898
- type: recall_at_3
value: 64.96300000000001
- type: recall_at_5
value: 70.736
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 42.027
- type: map_at_1
value: 22.118
- type: map_at_10
value: 34.816
- type: map_at_100
value: 35.983
- type: map_at_1000
value: 36.028999999999996
- type: map_at_20
value: 35.545
- type: map_at_3
value: 30.752000000000002
- type: map_at_5
value: 33.114
- type: mrr_at_1
value: 22.793696275071635
- type: mrr_at_10
value: 35.47250079592483
- type: mrr_at_100
value: 36.576471512902856
- type: mrr_at_1000
value: 36.616205680509786
- type: mrr_at_20
value: 36.16557033864942
- type: mrr_at_3
value: 31.48758357211065
- type: mrr_at_5
value: 33.80563514804202
- type: nauc_map_at_1000_diff1
value: 32.89234100489284
- type: nauc_map_at_1000_max
value: 1.1802816553581001
- type: nauc_map_at_1000_std
value: -20.187692925732446
- type: nauc_map_at_100_diff1
value: 32.88694493681772
- type: nauc_map_at_100_max
value: 1.1732717578080365
- type: nauc_map_at_100_std
value: -20.164165529035245
- type: nauc_map_at_10_diff1
value: 32.826182211848796
- type: nauc_map_at_10_max
value: 1.1551262165737235
- type: nauc_map_at_10_std
value: -20.88326292319754
- type: nauc_map_at_1_diff1
value: 36.12732122790642
- type: nauc_map_at_1_max
value: 1.8197550109156913
- type: nauc_map_at_1_std
value: -17.205625720792167
- type: nauc_map_at_20_diff1
value: 32.83333177195551
- type: nauc_map_at_20_max
value: 1.0937431645506202
- type: nauc_map_at_20_std
value: -20.503956514646145
- type: nauc_map_at_3_diff1
value: 32.76264193805814
- type: nauc_map_at_3_max
value: 0.8560962042500389
- type: nauc_map_at_3_std
value: -20.608930717315577
- type: nauc_map_at_5_diff1
value: 32.78673238978775
- type: nauc_map_at_5_max
value: 1.0511863039329437
- type: nauc_map_at_5_std
value: -21.02164728626011
- type: nauc_mrr_at_1000_diff1
value: 32.610323934702286
- type: nauc_mrr_at_1000_max
value: 1.276669121901405
- type: nauc_mrr_at_1000_std
value: -19.908120615285043
- type: nauc_mrr_at_100_diff1
value: 32.601373758102795
- type: nauc_mrr_at_100_max
value: 1.2752735149992132
- type: nauc_mrr_at_100_std
value: -19.87937042610101
- type: nauc_mrr_at_10_diff1
value: 32.55795432078168
- type: nauc_mrr_at_10_max
value: 1.2881786969258637
- type: nauc_mrr_at_10_std
value: -20.54564519015977
- type: nauc_mrr_at_1_diff1
value: 35.596301376443726
- type: nauc_mrr_at_1_max
value: 1.7633238037306902
- type: nauc_mrr_at_1_std
value: -17.1999420019887
- type: nauc_mrr_at_20_diff1
value: 32.57185739111023
- type: nauc_mrr_at_20_max
value: 1.2212620853201877
- type: nauc_mrr_at_20_std
value: -20.179517281041264
- type: nauc_mrr_at_3_diff1
value: 32.42681377099514
- type: nauc_mrr_at_3_max
value: 0.8745921708861145
- type: nauc_mrr_at_3_std
value: -20.41017687790572
- type: nauc_mrr_at_5_diff1
value: 32.499107129648266
- type: nauc_mrr_at_5_max
value: 1.1159673851851573
- type: nauc_mrr_at_5_std
value: -20.695143502133824
- type: nauc_ndcg_at_1000_diff1
value: 32.16957965806702
- type: nauc_ndcg_at_1000_max
value: 1.6763998947980905
- type: nauc_ndcg_at_1000_std
value: -18.970592350332893
- type: nauc_ndcg_at_100_diff1
value: 31.977550102558872
- type: nauc_ndcg_at_100_max
value: 1.5625858650110014
- type: nauc_ndcg_at_100_std
value: -17.990456766123835
- type: nauc_ndcg_at_10_diff1
value: 31.82738932481356
- type: nauc_ndcg_at_10_max
value: 1.1661362042692103
- type: nauc_ndcg_at_10_std
value: -21.872680193994217
- type: nauc_ndcg_at_1_diff1
value: 35.596301376443726
- type: nauc_ndcg_at_1_max
value: 1.7633238037306902
- type: nauc_ndcg_at_1_std
value: -17.1999420019887
- type: nauc_ndcg_at_20_diff1
value: 31.749656399266264
- type: nauc_ndcg_at_20_max
value: 0.9629024493088691
- type: nauc_ndcg_at_20_std
value: -20.4379403899277
- type: nauc_ndcg_at_3_diff1
value: 31.731361436850836
- type: nauc_ndcg_at_3_max
value: 0.531749791578849
- type: nauc_ndcg_at_3_std
value: -21.551112910698674
- type: nauc_ndcg_at_5_diff1
value: 31.785373941157303
- type: nauc_ndcg_at_5_max
value: 0.86207769368333
- type: nauc_ndcg_at_5_std
value: -22.24923399160171
- type: nauc_precision_at_1000_diff1
value: -3.841288331986519
- type: nauc_precision_at_1000_max
value: 13.558041371634976
- type: nauc_precision_at_1000_std
value: 15.181510484512827
- type: nauc_precision_at_100_diff1
value: 12.441154582709053
- type: nauc_precision_at_100_max
value: 8.428136255841935
- type: nauc_precision_at_100_std
value: 14.710391839731656
- type: nauc_precision_at_10_diff1
value: 26.185854813986705
- type: nauc_precision_at_10_max
value: 1.6348387310504464
- type: nauc_precision_at_10_std
value: -23.448927004357298
- type: nauc_precision_at_1_diff1
value: 35.596301376443726
- type: nauc_precision_at_1_max
value: 1.7633238037306902
- type: nauc_precision_at_1_std
value: -17.1999420019887
- type: nauc_precision_at_20_diff1
value: 22.69194179544158
- type: nauc_precision_at_20_max
value: 1.2972015009169306
- type: nauc_precision_at_20_std
value: -15.751482380060269
- type: nauc_precision_at_3_diff1
value: 28.255531512125188
- type: nauc_precision_at_3_max
value: -0.3715575458464333
- type: nauc_precision_at_3_std
value: -24.227970454057697
- type: nauc_precision_at_5_diff1
value: 27.65497951098847
- type: nauc_precision_at_5_max
value: 0.449773375292472
- type: nauc_precision_at_5_std
value: -25.37445450938601
- type: nauc_recall_at_1000_diff1
value: 15.243948516763819
- type: nauc_recall_at_1000_max
value: 41.821227805251375
- type: nauc_recall_at_1000_std
value: 61.66297794838101
- type: nauc_recall_at_100_diff1
value: 24.516543685029994
- type: nauc_recall_at_100_max
value: 7.093972966253228
- type: nauc_recall_at_100_std
value: 17.244452321212282
- type: nauc_recall_at_10_diff1
value: 28.404243095182828
- type: nauc_recall_at_10_max
value: 1.0805210480930945
- type: nauc_recall_at_10_std
value: -24.885018657039527
- type: nauc_recall_at_1_diff1
value: 36.12732122790642
- type: nauc_recall_at_1_max
value: 1.8197550109156913
- type: nauc_recall_at_1_std
value: -17.205625720792167
- type: nauc_recall_at_20_diff1
value: 26.956250169438512
- type: nauc_recall_at_20_max
value: 0.023973408161285917
- type: nauc_recall_at_20_std
value: -18.32944444428131
- type: nauc_recall_at_3_diff1
value: 28.9894205130054
- type: nauc_recall_at_3_max
value: -0.36140658021466865
- type: nauc_recall_at_3_std
value: -24.022505107768364
- type: nauc_recall_at_5_diff1
value: 28.907023434955104
- type: nauc_recall_at_5_max
value: 0.2501037567297729
- type: nauc_recall_at_5_std
value: -25.719919602271496
- type: ndcg_at_1
value: 22.794
- type: ndcg_at_10
value: 42.027
- type: ndcg_at_100
value: 47.601
- type: ndcg_at_1000
value: 48.713
- type: ndcg_at_20
value: 44.623000000000005
- type: ndcg_at_3
value: 33.772999999999996
- type: ndcg_at_5
value: 37.991
- type: precision_at_1
value: 22.794
- type: precision_at_10
value: 6.711
- type: precision_at_100
value: 0.9490000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 3.8920000000000003
- type: precision_at_3
value: 14.46
- type: precision_at_5
value: 10.822
- type: recall_at_1
value: 22.118
- type: recall_at_10
value: 64.201
- type: recall_at_100
value: 89.878
- type: recall_at_1000
value: 98.259
- type: recall_at_20
value: 74.34100000000001
- type: recall_at_3
value: 41.8
- type: recall_at_5
value: 51.959
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 36.201
- type: map_at_1
value: 5.654
- type: map_at_10
value: 13.402
- type: map_at_100
value: 16.849
- type: map_at_1000
value: 18.264
- type: map_at_20
value: 14.832
- type: map_at_3
value: 9.619
- type: map_at_5
value: 11.483
- type: mrr_at_1
value: 47.6780185758514
- type: mrr_at_10
value: 56.47906531033466
- type: mrr_at_100
value: 57.04539749991402
- type: mrr_at_1000
value: 57.08810157607369
- type: mrr_at_20
value: 56.88003170105462
- type: mrr_at_3
value: 54.43756449948401
- type: mrr_at_5
value: 55.660474716202266
- type: nauc_map_at_1000_diff1
value: 31.134615238698192
- type: nauc_map_at_1000_max
value: 36.09522002487132
- type: nauc_map_at_1000_std
value: 14.72627666649002
- type: nauc_map_at_100_diff1
value: 32.777473351864444
- type: nauc_map_at_100_max
value: 35.25391471621035
- type: nauc_map_at_100_std
value: 12.024428973861083
- type: nauc_map_at_10_diff1
value: 36.46466466148528
- type: nauc_map_at_10_max
value: 29.707805406826722
- type: nauc_map_at_10_std
value: 2.0678757794226335
- type: nauc_map_at_1_diff1
value: 54.30208426149679
- type: nauc_map_at_1_max
value: 18.69125148481608
- type: nauc_map_at_1_std
value: -8.970955660291802
- type: nauc_map_at_20_diff1
value: 34.76513311600623
- type: nauc_map_at_20_max
value: 32.20666003570514
- type: nauc_map_at_20_std
value: 5.924889441518581
- type: nauc_map_at_3_diff1
value: 45.73465176835491
- type: nauc_map_at_3_max
value: 23.492291524989106
- type: nauc_map_at_3_std
value: -5.0123536561688855
- type: nauc_map_at_5_diff1
value: 39.7128319374107
- type: nauc_map_at_5_max
value: 25.84231729559691
- type: nauc_map_at_5_std
value: -2.0861428981140344
- type: nauc_mrr_at_1000_diff1
value: 33.0997881703397
- type: nauc_mrr_at_1000_max
value: 52.7089709923531
- type: nauc_mrr_at_1000_std
value: 28.8517952674151
- type: nauc_mrr_at_100_diff1
value: 33.1094984027438
- type: nauc_mrr_at_100_max
value: 52.74301398138847
- type: nauc_mrr_at_100_std
value: 28.897997840300892
- type: nauc_mrr_at_10_diff1
value: 33.300713655464925
- type: nauc_mrr_at_10_max
value: 52.572139698742184
- type: nauc_mrr_at_10_std
value: 28.66875615527188
- type: nauc_mrr_at_1_diff1
value: 32.57632582147155
- type: nauc_mrr_at_1_max
value: 46.020072246328816
- type: nauc_mrr_at_1_std
value: 20.99097889820076
- type: nauc_mrr_at_20_diff1
value: 33.04083904518949
- type: nauc_mrr_at_20_max
value: 52.597451362456994
- type: nauc_mrr_at_20_std
value: 28.681527293587898
- type: nauc_mrr_at_3_diff1
value: 33.64864656322754
- type: nauc_mrr_at_3_max
value: 51.82256412011279
- type: nauc_mrr_at_3_std
value: 27.241260746740686
- type: nauc_mrr_at_5_diff1
value: 33.53201325467246
- type: nauc_mrr_at_5_max
value: 52.79440885773516
- type: nauc_mrr_at_5_std
value: 28.663081392086028
- type: nauc_ndcg_at_1000_diff1
value: 28.632650542040714
- type: nauc_ndcg_at_1000_max
value: 51.24103069835822
- type: nauc_ndcg_at_1000_std
value: 35.05503784757999
- type: nauc_ndcg_at_100_diff1
value: 29.082177715298503
- type: nauc_ndcg_at_100_max
value: 45.24750203464315
- type: nauc_ndcg_at_100_std
value: 27.146548925680914
- type: nauc_ndcg_at_10_diff1
value: 25.123554466093594
- type: nauc_ndcg_at_10_max
value: 42.74355537806512
- type: nauc_ndcg_at_10_std
value: 22.234407997803935
- type: nauc_ndcg_at_1_diff1
value: 33.75083940012058
- type: nauc_ndcg_at_1_max
value: 44.44319402133161
- type: nauc_ndcg_at_1_std
value: 19.146499358406487
- type: nauc_ndcg_at_20_diff1
value: 24.954207968331872
- type: nauc_ndcg_at_20_max
value: 41.25991844405748
- type: nauc_ndcg_at_20_std
value: 22.169009285868864
- type: nauc_ndcg_at_3_diff1
value: 28.186539942033516
- type: nauc_ndcg_at_3_max
value: 44.40790009754965
- type: nauc_ndcg_at_3_std
value: 20.99226576085115
- type: nauc_ndcg_at_5_diff1
value: 25.498387899376706
- type: nauc_ndcg_at_5_max
value: 43.174709766261316
- type: nauc_ndcg_at_5_std
value: 21.88111962672031
- type: nauc_precision_at_1000_diff1
value: -16.22321012507648
- type: nauc_precision_at_1000_max
value: 5.808852256649677
- type: nauc_precision_at_1000_std
value: 19.875641776698824
- type: nauc_precision_at_100_diff1
value: -10.248089374355486
- type: nauc_precision_at_100_max
value: 19.29065415127588
- type: nauc_precision_at_100_std
value: 31.75019665627339
- type: nauc_precision_at_10_diff1
value: 3.6783257583955056
- type: nauc_precision_at_10_max
value: 39.22286010695767
- type: nauc_precision_at_10_std
value: 31.225485732801022
- type: nauc_precision_at_1_diff1
value: 32.57632582147155
- type: nauc_precision_at_1_max
value: 46.020072246328816
- type: nauc_precision_at_1_std
value: 20.99097889820076
- type: nauc_precision_at_20_diff1
value: -3.1632510833242784
- type: nauc_precision_at_20_max
value: 31.575496762405734
- type: nauc_precision_at_20_std
value: 31.576283324468115
- type: nauc_precision_at_3_diff1
value: 17.78864585545647
- type: nauc_precision_at_3_max
value: 44.201289661125585
- type: nauc_precision_at_3_std
value: 25.447840649726693
- type: nauc_precision_at_5_diff1
value: 9.986748662091358
- type: nauc_precision_at_5_max
value: 41.214164860776755
- type: nauc_precision_at_5_std
value: 28.22551704127726
- type: nauc_recall_at_1000_diff1
value: 10.984331766850506
- type: nauc_recall_at_1000_max
value: 24.641216018034104
- type: nauc_recall_at_1000_std
value: 26.91064221008446
- type: nauc_recall_at_100_diff1
value: 23.7009352078473
- type: nauc_recall_at_100_max
value: 30.176031609451297
- type: nauc_recall_at_100_std
value: 20.360365243211564
- type: nauc_recall_at_10_diff1
value: 28.11831737650638
- type: nauc_recall_at_10_max
value: 24.21539670487414
- type: nauc_recall_at_10_std
value: 2.245504974150148
- type: nauc_recall_at_1_diff1
value: 54.30208426149679
- type: nauc_recall_at_1_max
value: 18.69125148481608
- type: nauc_recall_at_1_std
value: -8.970955660291802
- type: nauc_recall_at_20_diff1
value: 26.199425305139908
- type: nauc_recall_at_20_max
value: 24.66704097503736
- type: nauc_recall_at_20_std
value: 5.86052107206246
- type: nauc_recall_at_3_diff1
value: 42.88348677575622
- type: nauc_recall_at_3_max
value: 21.189371077603308
- type: nauc_recall_at_3_std
value: -4.537510127238226
- type: nauc_recall_at_5_diff1
value: 30.7936756722569
- type: nauc_recall_at_5_max
value: 21.06136406164962
- type: nauc_recall_at_5_std
value: -1.4113804735229794
- type: ndcg_at_1
value: 45.975
- type: ndcg_at_10
value: 36.201
- type: ndcg_at_100
value: 32.736
- type: ndcg_at_1000
value: 41.099000000000004
- type: ndcg_at_20
value: 33.724
- type: ndcg_at_3
value: 42.242000000000004
- type: ndcg_at_5
value: 40.137
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 26.904
- type: precision_at_100
value: 8.368
- type: precision_at_1000
value: 2.078
- type: precision_at_20
value: 19.845
- type: precision_at_3
value: 40.351
- type: precision_at_5
value: 35.108
- type: recall_at_1
value: 5.654
- type: recall_at_10
value: 17.793
- type: recall_at_100
value: 32.483000000000004
- type: recall_at_1000
value: 63.294
- type: recall_at_20
value: 21.754
- type: recall_at_3
value: 10.771
- type: recall_at_5
value: 14.084
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 62.464
- type: map_at_1
value: 38.0
- type: map_at_10
value: 54.806
- type: map_at_100
value: 55.599
- type: map_at_1000
value: 55.617000000000004
- type: map_at_20
value: 55.336
- type: map_at_3
value: 50.58200000000001
- type: map_at_5
value: 53.181
- type: mrr_at_1
value: 42.46813441483198
- type: mrr_at_10
value: 57.060710147326446
- type: mrr_at_100
value: 57.60978373431328
- type: mrr_at_1000
value: 57.62192762809547
- type: mrr_at_20
value: 57.43431796174232
- type: mrr_at_3
value: 53.78041714947835
- type: mrr_at_5
value: 55.81257242178437
- type: nauc_map_at_1000_diff1
value: 38.337572188308194
- type: nauc_map_at_1000_max
value: 27.550035254787197
- type: nauc_map_at_1000_std
value: -7.5513729587308145
- type: nauc_map_at_100_diff1
value: 38.335337794455015
- type: nauc_map_at_100_max
value: 27.56919614414171
- type: nauc_map_at_100_std
value: -7.526017855405723
- type: nauc_map_at_10_diff1
value: 38.308131361353816
- type: nauc_map_at_10_max
value: 27.691849580929933
- type: nauc_map_at_10_std
value: -7.971461731555123
- type: nauc_map_at_1_diff1
value: 42.721072690634884
- type: nauc_map_at_1_max
value: 21.750451486885332
- type: nauc_map_at_1_std
value: -9.99540950522643
- type: nauc_map_at_20_diff1
value: 38.25792874982169
- type: nauc_map_at_20_max
value: 27.68877906159661
- type: nauc_map_at_20_std
value: -7.560753583212102
- type: nauc_map_at_3_diff1
value: 37.950570055936254
- type: nauc_map_at_3_max
value: 26.257969511794858
- type: nauc_map_at_3_std
value: -9.236868658300553
- type: nauc_map_at_5_diff1
value: 37.99893219450212
- type: nauc_map_at_5_max
value: 27.293454259158057
- type: nauc_map_at_5_std
value: -8.734089449603806
- type: nauc_mrr_at_1000_diff1
value: 37.777767467474774
- type: nauc_mrr_at_1000_max
value: 27.39507603748298
- type: nauc_mrr_at_1000_std
value: -5.554754076870114
- type: nauc_mrr_at_100_diff1
value: 37.77981674583538
- type: nauc_mrr_at_100_max
value: 27.411100989441557
- type: nauc_mrr_at_100_std
value: -5.539061231412731
- type: nauc_mrr_at_10_diff1
value: 37.72399003363479
- type: nauc_mrr_at_10_max
value: 27.618142546685416
- type: nauc_mrr_at_10_std
value: -5.6819843907448195
- type: nauc_mrr_at_1_diff1
value: 41.17596078958236
- type: nauc_mrr_at_1_max
value: 23.32588591818617
- type: nauc_mrr_at_1_std
value: -7.126628034623689
- type: nauc_mrr_at_20_diff1
value: 37.695136721588
- type: nauc_mrr_at_20_max
value: 27.52850676467322
- type: nauc_mrr_at_20_std
value: -5.50667995515647
- type: nauc_mrr_at_3_diff1
value: 37.23845700908964
- type: nauc_mrr_at_3_max
value: 26.69389772971012
- type: nauc_mrr_at_3_std
value: -6.31868405989011
- type: nauc_mrr_at_5_diff1
value: 37.33757394192838
- type: nauc_mrr_at_5_max
value: 27.42091593836207
- type: nauc_mrr_at_5_std
value: -5.993243330132065
- type: nauc_ndcg_at_1000_diff1
value: 37.74836061640332
- type: nauc_ndcg_at_1000_max
value: 29.03148916289089
- type: nauc_ndcg_at_1000_std
value: -5.543065770074502
- type: nauc_ndcg_at_100_diff1
value: 37.75593955089626
- type: nauc_ndcg_at_100_max
value: 29.67109480272493
- type: nauc_ndcg_at_100_std
value: -4.773697596687493
- type: nauc_ndcg_at_10_diff1
value: 37.41701174824348
- type: nauc_ndcg_at_10_max
value: 30.448703434043445
- type: nauc_ndcg_at_10_std
value: -6.306202666419071
- type: nauc_ndcg_at_1_diff1
value: 41.17596078958236
- type: nauc_ndcg_at_1_max
value: 23.32588591818617
- type: nauc_ndcg_at_1_std
value: -7.126628034623689
- type: nauc_ndcg_at_20_diff1
value: 37.17445197824622
- type: nauc_ndcg_at_20_max
value: 30.47378561555209
- type: nauc_ndcg_at_20_std
value: -4.921584853993488
- type: nauc_ndcg_at_3_diff1
value: 36.5261976812068
- type: nauc_ndcg_at_3_max
value: 27.560538820208926
- type: nauc_ndcg_at_3_std
value: -8.556686332882931
- type: nauc_ndcg_at_5_diff1
value: 36.571462759614526
- type: nauc_ndcg_at_5_max
value: 29.363401730752585
- type: nauc_ndcg_at_5_std
value: -7.825739170420347
- type: nauc_precision_at_1000_diff1
value: -12.588899483401223
- type: nauc_precision_at_1000_max
value: 2.641097890578701
- type: nauc_precision_at_1000_std
value: 17.643107625788748
- type: nauc_precision_at_100_diff1
value: -8.40579874206785
- type: nauc_precision_at_100_max
value: 9.725496771040037
- type: nauc_precision_at_100_std
value: 21.558582760191243
- type: nauc_precision_at_10_diff1
value: 6.619157191854486
- type: nauc_precision_at_10_max
value: 23.767406373688402
- type: nauc_precision_at_10_std
value: 10.428535003478808
- type: nauc_precision_at_1_diff1
value: 41.17596078958236
- type: nauc_precision_at_1_max
value: 23.32588591818617
- type: nauc_precision_at_1_std
value: -7.126628034623689
- type: nauc_precision_at_20_diff1
value: -0.6449974218292859
- type: nauc_precision_at_20_max
value: 20.211503851418783
- type: nauc_precision_at_20_std
value: 17.922745410142575
- type: nauc_precision_at_3_diff1
value: 19.710276097428657
- type: nauc_precision_at_3_max
value: 26.768918044758706
- type: nauc_precision_at_3_std
value: -1.0636448912049246
- type: nauc_precision_at_5_diff1
value: 13.073181337982613
- type: nauc_precision_at_5_max
value: 26.418340338971024
- type: nauc_precision_at_5_std
value: 2.9842078949528688
- type: nauc_recall_at_1000_diff1
value: 30.52411148739828
- type: nauc_recall_at_1000_max
value: 90.96409807536762
- type: nauc_recall_at_1000_std
value: 83.94857830921949
- type: nauc_recall_at_100_diff1
value: 36.936303690592155
- type: nauc_recall_at_100_max
value: 71.91515014325869
- type: nauc_recall_at_100_std
value: 48.93061263403371
- type: nauc_recall_at_10_diff1
value: 32.84292362076269
- type: nauc_recall_at_10_max
value: 44.27252783122478
- type: nauc_recall_at_10_std
value: -1.5981198975612385
- type: nauc_recall_at_1_diff1
value: 42.721072690634884
- type: nauc_recall_at_1_max
value: 21.750451486885332
- type: nauc_recall_at_1_std
value: -9.99540950522643
- type: nauc_recall_at_20_diff1
value: 29.36724417081702
- type: nauc_recall_at_20_max
value: 52.035846390214715
- type: nauc_recall_at_20_std
value: 11.967264191332818
- type: nauc_recall_at_3_diff1
value: 31.634923771936098
- type: nauc_recall_at_3_max
value: 30.225743369869473
- type: nauc_recall_at_3_std
value: -9.253665347118615
- type: nauc_recall_at_5_diff1
value: 30.66271853090737
- type: nauc_recall_at_5_max
value: 35.70815715994996
- type: nauc_recall_at_5_std
value: -7.836012956078996
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 62.464
- type: ndcg_at_100
value: 65.618
- type: ndcg_at_1000
value: 66.014
- type: ndcg_at_20
value: 64.12
- type: ndcg_at_3
value: 54.790000000000006
- type: ndcg_at_5
value: 58.992
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.959
- type: precision_at_100
value: 1.174
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.380999999999999
- type: precision_at_3
value: 24.73
- type: precision_at_5
value: 17.299999999999997
- type: recall_at_1
value: 38.0
- type: recall_at_10
value: 83.22699999999999
- type: recall_at_100
value: 96.584
- type: recall_at_1000
value: 99.512
- type: recall_at_20
value: 89.291
- type: recall_at_3
value: 63.666
- type: recall_at_5
value: 73.27900000000001
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 87.366
- type: map_at_1
value: 69.95700000000001
- type: map_at_10
value: 83.55
- type: map_at_100
value: 84.196
- type: map_at_1000
value: 84.21600000000001
- type: map_at_20
value: 83.982
- type: map_at_3
value: 80.647
- type: map_at_5
value: 82.443
- type: mrr_at_1
value: 80.39
- type: mrr_at_10
value: 86.65646031746004
- type: mrr_at_100
value: 86.7852113210373
- type: mrr_at_1000
value: 86.78651118354796
- type: mrr_at_20
value: 86.75772838878498
- type: mrr_at_3
value: 85.67499999999971
- type: mrr_at_5
value: 86.33749999999962
- type: nauc_map_at_1000_diff1
value: 76.68189702770007
- type: nauc_map_at_1000_max
value: 36.19988239025682
- type: nauc_map_at_1000_std
value: -26.231691135645736
- type: nauc_map_at_100_diff1
value: 76.68832712120171
- type: nauc_map_at_100_max
value: 36.18627717337547
- type: nauc_map_at_100_std
value: -26.28243886166
- type: nauc_map_at_10_diff1
value: 76.88888516032657
- type: nauc_map_at_10_max
value: 35.69809861085124
- type: nauc_map_at_10_std
value: -27.859425473864224
- type: nauc_map_at_1_diff1
value: 79.5243725217315
- type: nauc_map_at_1_max
value: 27.092773841207002
- type: nauc_map_at_1_std
value: -26.223200911204543
- type: nauc_map_at_20_diff1
value: 76.74938996155176
- type: nauc_map_at_20_max
value: 36.07373781351406
- type: nauc_map_at_20_std
value: -26.891400098628015
- type: nauc_map_at_3_diff1
value: 77.29604745045076
- type: nauc_map_at_3_max
value: 33.11431059356283
- type: nauc_map_at_3_std
value: -29.555237195931085
- type: nauc_map_at_5_diff1
value: 77.14069217901078
- type: nauc_map_at_5_max
value: 34.68656073526487
- type: nauc_map_at_5_std
value: -28.945053669861508
- type: nauc_mrr_at_1000_diff1
value: 76.66087451567746
- type: nauc_mrr_at_1000_max
value: 38.78133177265328
- type: nauc_mrr_at_1000_std
value: -23.75726541774991
- type: nauc_mrr_at_100_diff1
value: 76.66117078261013
- type: nauc_mrr_at_100_max
value: 38.782533036423885
- type: nauc_mrr_at_100_std
value: -23.752587601473568
- type: nauc_mrr_at_10_diff1
value: 76.65866401411019
- type: nauc_mrr_at_10_max
value: 38.87950311049704
- type: nauc_mrr_at_10_std
value: -23.873660706680578
- type: nauc_mrr_at_1_diff1
value: 77.42633506487041
- type: nauc_mrr_at_1_max
value: 37.93973722217786
- type: nauc_mrr_at_1_std
value: -23.3984130771317
- type: nauc_mrr_at_20_diff1
value: 76.66210684923414
- type: nauc_mrr_at_20_max
value: 38.81293033048911
- type: nauc_mrr_at_20_std
value: -23.736590746133736
- type: nauc_mrr_at_3_diff1
value: 76.33711764736019
- type: nauc_mrr_at_3_max
value: 38.5659231830368
- type: nauc_mrr_at_3_std
value: -23.99588149124865
- type: nauc_mrr_at_5_diff1
value: 76.57123830226054
- type: nauc_mrr_at_5_max
value: 38.97947097392977
- type: nauc_mrr_at_5_std
value: -23.943668957974246
- type: nauc_ndcg_at_1000_diff1
value: 76.38447339050585
- type: nauc_ndcg_at_1000_max
value: 37.756822792877934
- type: nauc_ndcg_at_1000_std
value: -24.046995734357164
- type: nauc_ndcg_at_100_diff1
value: 76.44058018066822
- type: nauc_ndcg_at_100_max
value: 37.72948294169218
- type: nauc_ndcg_at_100_std
value: -24.083432140741795
- type: nauc_ndcg_at_10_diff1
value: 76.56246287923074
- type: nauc_ndcg_at_10_max
value: 37.0329253490553
- type: nauc_ndcg_at_10_std
value: -26.6495163705961
- type: nauc_ndcg_at_1_diff1
value: 77.4085129990432
- type: nauc_ndcg_at_1_max
value: 38.06139172214421
- type: nauc_ndcg_at_1_std
value: -23.656477126977386
- type: nauc_ndcg_at_20_diff1
value: 76.50192496743098
- type: nauc_ndcg_at_20_max
value: 37.51759311013985
- type: nauc_ndcg_at_20_std
value: -25.45517058360004
- type: nauc_ndcg_at_3_diff1
value: 75.94398494081794
- type: nauc_ndcg_at_3_max
value: 35.7666711547279
- type: nauc_ndcg_at_3_std
value: -26.866022682361578
- type: nauc_ndcg_at_5_diff1
value: 76.47334274088344
- type: nauc_ndcg_at_5_max
value: 36.40830331490731
- type: nauc_ndcg_at_5_std
value: -27.170121189572765
- type: nauc_precision_at_1000_diff1
value: -43.33672630765437
- type: nauc_precision_at_1000_max
value: -5.089751329149161
- type: nauc_precision_at_1000_std
value: 30.6241447847051
- type: nauc_precision_at_100_diff1
value: -42.736833035629864
- type: nauc_precision_at_100_max
value: -4.060198408346224
- type: nauc_precision_at_100_std
value: 29.807050266205344
- type: nauc_precision_at_10_diff1
value: -35.90810562245906
- type: nauc_precision_at_10_max
value: 1.1633204529249133
- type: nauc_precision_at_10_std
value: 20.129691203276018
- type: nauc_precision_at_1_diff1
value: 77.4085129990432
- type: nauc_precision_at_1_max
value: 38.06139172214421
- type: nauc_precision_at_1_std
value: -23.656477126977386
- type: nauc_precision_at_20_diff1
value: -40.2132286912738
- type: nauc_precision_at_20_max
value: -1.3004735030734194
- type: nauc_precision_at_20_std
value: 25.15612293757488
- type: nauc_precision_at_3_diff1
value: -13.873825299883904
- type: nauc_precision_at_3_max
value: 11.038689278907233
- type: nauc_precision_at_3_std
value: 5.4276449621706
- type: nauc_precision_at_5_diff1
value: -27.151668633894737
- type: nauc_precision_at_5_max
value: 5.795130010163115
- type: nauc_precision_at_5_std
value: 13.220722167587375
- type: nauc_recall_at_1000_diff1
value: 83.903950427863
- type: nauc_recall_at_1000_max
value: 37.82919000897223
- type: nauc_recall_at_1000_std
value: 70.65670846771707
- type: nauc_recall_at_100_diff1
value: 75.23306095335836
- type: nauc_recall_at_100_max
value: 37.54281648247423
- type: nauc_recall_at_100_std
value: 8.434289114377373
- type: nauc_recall_at_10_diff1
value: 72.7872912723047
- type: nauc_recall_at_10_max
value: 34.261519652104184
- type: nauc_recall_at_10_std
value: -34.60101950810808
- type: nauc_recall_at_1_diff1
value: 79.5243725217315
- type: nauc_recall_at_1_max
value: 27.092773841207002
- type: nauc_recall_at_1_std
value: -26.223200911204543
- type: nauc_recall_at_20_diff1
value: 72.8297963091964
- type: nauc_recall_at_20_max
value: 36.070220569670916
- type: nauc_recall_at_20_std
value: -27.20897179168245
- type: nauc_recall_at_3_diff1
value: 73.47456374650459
- type: nauc_recall_at_3_max
value: 29.901663407294816
- type: nauc_recall_at_3_std
value: -32.83329537040381
- type: nauc_recall_at_5_diff1
value: 73.05025750827126
- type: nauc_recall_at_5_max
value: 32.35733470860963
- type: nauc_recall_at_5_std
value: -34.32357558493091
- type: ndcg_at_1
value: 80.4
- type: ndcg_at_10
value: 87.366
- type: ndcg_at_100
value: 88.7
- type: ndcg_at_1000
value: 88.842
- type: ndcg_at_20
value: 88.11
- type: ndcg_at_3
value: 84.52499999999999
- type: ndcg_at_5
value: 86.047
- type: precision_at_1
value: 80.4
- type: precision_at_10
value: 13.235
- type: precision_at_100
value: 1.516
- type: precision_at_1000
value: 0.156
- type: precision_at_20
value: 7.037
- type: precision_at_3
value: 36.9
- type: precision_at_5
value: 24.236
- type: recall_at_1
value: 69.95700000000001
- type: recall_at_10
value: 94.535
- type: recall_at_100
value: 99.164
- type: recall_at_1000
value: 99.855
- type: recall_at_20
value: 96.974
- type: recall_at_3
value: 86.33800000000001
- type: recall_at_5
value: 90.69
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 21.492
- type: map_at_1
value: 5.192
- type: map_at_10
value: 12.959000000000001
- type: map_at_100
value: 14.963999999999999
- type: map_at_1000
value: 15.261
- type: map_at_20
value: 13.988999999999999
- type: map_at_3
value: 9.235
- type: map_at_5
value: 11.042
- type: mrr_at_1
value: 25.5
- type: mrr_at_10
value: 36.37313492063491
- type: mrr_at_100
value: 37.36517957347626
- type: mrr_at_1000
value: 37.42538601073437
- type: mrr_at_20
value: 36.987896404421136
- type: mrr_at_3
value: 32.966666666666654
- type: mrr_at_5
value: 34.95166666666664
- type: nauc_map_at_1000_diff1
value: 13.635120934154395
- type: nauc_map_at_1000_max
value: 28.03542983005195
- type: nauc_map_at_1000_std
value: 17.07156940311778
- type: nauc_map_at_100_diff1
value: 13.59237295184475
- type: nauc_map_at_100_max
value: 27.992291365051237
- type: nauc_map_at_100_std
value: 16.926533467400464
- type: nauc_map_at_10_diff1
value: 14.149193235999993
- type: nauc_map_at_10_max
value: 26.520643811139305
- type: nauc_map_at_10_std
value: 13.168673602548925
- type: nauc_map_at_1_diff1
value: 20.096094508148465
- type: nauc_map_at_1_max
value: 17.41582245576302
- type: nauc_map_at_1_std
value: 5.771729007558897
- type: nauc_map_at_20_diff1
value: 13.977726400526427
- type: nauc_map_at_20_max
value: 27.2322235491895
- type: nauc_map_at_20_std
value: 14.972781677750435
- type: nauc_map_at_3_diff1
value: 17.371153027460355
- type: nauc_map_at_3_max
value: 24.457758503208254
- type: nauc_map_at_3_std
value: 7.719726821179824
- type: nauc_map_at_5_diff1
value: 14.600442843442574
- type: nauc_map_at_5_max
value: 25.899736370856296
- type: nauc_map_at_5_std
value: 10.125349354853359
- type: nauc_mrr_at_1000_diff1
value: 18.70342821390236
- type: nauc_mrr_at_1000_max
value: 23.365194520549114
- type: nauc_mrr_at_1000_std
value: 12.185114294903236
- type: nauc_mrr_at_100_diff1
value: 18.677858738015907
- type: nauc_mrr_at_100_max
value: 23.372641996726742
- type: nauc_mrr_at_100_std
value: 12.216130561991909
- type: nauc_mrr_at_10_diff1
value: 18.79094453090232
- type: nauc_mrr_at_10_max
value: 23.511686337006466
- type: nauc_mrr_at_10_std
value: 11.879716687008134
- type: nauc_mrr_at_1_diff1
value: 20.10455171810408
- type: nauc_mrr_at_1_max
value: 17.741566234315428
- type: nauc_mrr_at_1_std
value: 6.1676764583652215
- type: nauc_mrr_at_20_diff1
value: 18.70143648544655
- type: nauc_mrr_at_20_max
value: 23.45603239095019
- type: nauc_mrr_at_20_std
value: 12.244613576686202
- type: nauc_mrr_at_3_diff1
value: 18.894662528857374
- type: nauc_mrr_at_3_max
value: 23.3739038101588
- type: nauc_mrr_at_3_std
value: 10.4709044796543
- type: nauc_mrr_at_5_diff1
value: 18.877786065095563
- type: nauc_mrr_at_5_max
value: 23.78061081203872
- type: nauc_mrr_at_5_std
value: 11.847882917869622
- type: nauc_ndcg_at_1000_diff1
value: 13.99159027398115
- type: nauc_ndcg_at_1000_max
value: 29.44766808611483
- type: nauc_ndcg_at_1000_std
value: 24.289749574699915
- type: nauc_ndcg_at_100_diff1
value: 13.164020363258746
- type: nauc_ndcg_at_100_max
value: 29.642442997167723
- type: nauc_ndcg_at_100_std
value: 23.761764515453866
- type: nauc_ndcg_at_10_diff1
value: 14.839883268638546
- type: nauc_ndcg_at_10_max
value: 27.21043708455449
- type: nauc_ndcg_at_10_std
value: 15.56110419291775
- type: nauc_ndcg_at_1_diff1
value: 20.10455171810408
- type: nauc_ndcg_at_1_max
value: 17.741566234315428
- type: nauc_ndcg_at_1_std
value: 6.1676764583652215
- type: nauc_ndcg_at_20_diff1
value: 14.27998110295395
- type: nauc_ndcg_at_20_max
value: 28.2492026337839
- type: nauc_ndcg_at_20_std
value: 18.822356982979105
- type: nauc_ndcg_at_3_diff1
value: 17.659263157535445
- type: nauc_ndcg_at_3_max
value: 25.416706421591396
- type: nauc_ndcg_at_3_std
value: 9.650689638152636
- type: nauc_ndcg_at_5_diff1
value: 15.38459833918123
- type: nauc_ndcg_at_5_max
value: 26.92495519416969
- type: nauc_ndcg_at_5_std
value: 12.71017696809276
- type: nauc_precision_at_1000_diff1
value: 6.128490135458364
- type: nauc_precision_at_1000_max
value: 23.52693893261883
- type: nauc_precision_at_1000_std
value: 36.280432732819925
- type: nauc_precision_at_100_diff1
value: 5.306163791220436
- type: nauc_precision_at_100_max
value: 27.67851033239246
- type: nauc_precision_at_100_std
value: 34.29821573752515
- type: nauc_precision_at_10_diff1
value: 10.829686435425472
- type: nauc_precision_at_10_max
value: 27.201648684015318
- type: nauc_precision_at_10_std
value: 19.376999508233254
- type: nauc_precision_at_1_diff1
value: 20.10455171810408
- type: nauc_precision_at_1_max
value: 17.741566234315428
- type: nauc_precision_at_1_std
value: 6.1676764583652215
- type: nauc_precision_at_20_diff1
value: 9.416169626702048
- type: nauc_precision_at_20_max
value: 27.65257998670333
- type: nauc_precision_at_20_std
value: 24.761868509805826
- type: nauc_precision_at_3_diff1
value: 16.666456902017348
- type: nauc_precision_at_3_max
value: 27.9969730961105
- type: nauc_precision_at_3_std
value: 10.991562741393231
- type: nauc_precision_at_5_diff1
value: 12.26205064462843
- type: nauc_precision_at_5_max
value: 29.083848730874095
- type: nauc_precision_at_5_std
value: 15.66630836555747
- type: nauc_recall_at_1000_diff1
value: 5.600277836894063
- type: nauc_recall_at_1000_max
value: 23.228705161815526
- type: nauc_recall_at_1000_std
value: 36.822431061799485
- type: nauc_recall_at_100_diff1
value: 4.991781244867178
- type: nauc_recall_at_100_max
value: 27.70095625483475
- type: nauc_recall_at_100_std
value: 34.67168431597854
- type: nauc_recall_at_10_diff1
value: 10.580860425931972
- type: nauc_recall_at_10_max
value: 27.145829414223666
- type: nauc_recall_at_10_std
value: 19.330630157067382
- type: nauc_recall_at_1_diff1
value: 20.096094508148465
- type: nauc_recall_at_1_max
value: 17.41582245576302
- type: nauc_recall_at_1_std
value: 5.771729007558897
- type: nauc_recall_at_20_diff1
value: 9.06945331260344
- type: nauc_recall_at_20_max
value: 27.56725251066482
- type: nauc_recall_at_20_std
value: 24.77644509886098
- type: nauc_recall_at_3_diff1
value: 16.660507676429322
- type: nauc_recall_at_3_max
value: 27.816546386536434
- type: nauc_recall_at_3_std
value: 10.687824478247007
- type: nauc_recall_at_5_diff1
value: 11.992514446369388
- type: nauc_recall_at_5_max
value: 28.789031176671948
- type: nauc_recall_at_5_std
value: 15.422118990090805
- type: ndcg_at_1
value: 25.5
- type: ndcg_at_10
value: 21.492
- type: ndcg_at_100
value: 29.022
- type: ndcg_at_1000
value: 34.298
- type: ndcg_at_20
value: 24.237000000000002
- type: ndcg_at_3
value: 20.392
- type: ndcg_at_5
value: 17.801000000000002
- type: precision_at_1
value: 25.5
- type: precision_at_10
value: 11.09
- type: precision_at_100
value: 2.1919999999999997
- type: precision_at_1000
value: 0.346
- type: precision_at_20
value: 7.135
- type: precision_at_3
value: 18.933
- type: precision_at_5
value: 15.52
- type: recall_at_1
value: 5.192
- type: recall_at_10
value: 22.512999999999998
- type: recall_at_100
value: 44.505
- type: recall_at_1000
value: 70.267
- type: recall_at_20
value: 28.965000000000003
- type: recall_at_3
value: 11.522
- type: recall_at_5
value: 15.751999999999999
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 71.586
- type: map_at_1
value: 56.760999999999996
- type: map_at_10
value: 66.893
- type: map_at_100
value: 67.42
- type: map_at_1000
value: 67.44200000000001
- type: map_at_20
value: 67.232
- type: map_at_3
value: 64.193
- type: map_at_5
value: 65.73400000000001
- type: mrr_at_1
value: 60.0
- type: mrr_at_10
value: 68.20383597883595
- type: mrr_at_100
value: 68.58867453733343
- type: mrr_at_1000
value: 68.61117469977329
- type: mrr_at_20
value: 68.43973740684265
- type: mrr_at_3
value: 66.11111111111111
- type: mrr_at_5
value: 67.44444444444446
- type: nauc_map_at_1000_diff1
value: 72.66688261123035
- type: nauc_map_at_1000_max
value: 61.02926282006283
- type: nauc_map_at_1000_std
value: 11.084549829740526
- type: nauc_map_at_100_diff1
value: 72.66226192320828
- type: nauc_map_at_100_max
value: 61.04393223108811
- type: nauc_map_at_100_std
value: 11.101529343291695
- type: nauc_map_at_10_diff1
value: 72.66732266693091
- type: nauc_map_at_10_max
value: 61.24124296311832
- type: nauc_map_at_10_std
value: 10.91179451961794
- type: nauc_map_at_1_diff1
value: 74.2356464256346
- type: nauc_map_at_1_max
value: 54.06962758957632
- type: nauc_map_at_1_std
value: 0.8037891907963532
- type: nauc_map_at_20_diff1
value: 72.65198594061253
- type: nauc_map_at_20_max
value: 61.130159351448185
- type: nauc_map_at_20_std
value: 11.2246899245522
- type: nauc_map_at_3_diff1
value: 72.78578673303954
- type: nauc_map_at_3_max
value: 59.19073262936321
- type: nauc_map_at_3_std
value: 8.460301560522968
- type: nauc_map_at_5_diff1
value: 72.55004168261968
- type: nauc_map_at_5_max
value: 59.75181935082357
- type: nauc_map_at_5_std
value: 9.440299527201889
- type: nauc_mrr_at_1000_diff1
value: 72.82720348470325
- type: nauc_mrr_at_1000_max
value: 62.344231223741446
- type: nauc_mrr_at_1000_std
value: 12.60196558488974
- type: nauc_mrr_at_100_diff1
value: 72.82236849255094
- type: nauc_mrr_at_100_max
value: 62.35799491393125
- type: nauc_mrr_at_100_std
value: 12.617900773655673
- type: nauc_mrr_at_10_diff1
value: 72.7722847495086
- type: nauc_mrr_at_10_max
value: 62.66642401155435
- type: nauc_mrr_at_10_std
value: 12.906381237738746
- type: nauc_mrr_at_1_diff1
value: 74.71208073612343
- type: nauc_mrr_at_1_max
value: 59.50430394775893
- type: nauc_mrr_at_1_std
value: 8.129514198080512
- type: nauc_mrr_at_20_diff1
value: 72.78312367361772
- type: nauc_mrr_at_20_max
value: 62.421122493761885
- type: nauc_mrr_at_20_std
value: 12.693437522498588
- type: nauc_mrr_at_3_diff1
value: 73.50670156385345
- type: nauc_mrr_at_3_max
value: 62.01717537699209
- type: nauc_mrr_at_3_std
value: 11.926548252191182
- type: nauc_mrr_at_5_diff1
value: 72.62204028549876
- type: nauc_mrr_at_5_max
value: 62.319358766312085
- type: nauc_mrr_at_5_std
value: 13.081257923284342
- type: nauc_ndcg_at_1000_diff1
value: 72.29960539074736
- type: nauc_ndcg_at_1000_max
value: 62.75096959221402
- type: nauc_ndcg_at_1000_std
value: 13.81528462505362
- type: nauc_ndcg_at_100_diff1
value: 72.19985782073529
- type: nauc_ndcg_at_100_max
value: 63.18837705326287
- type: nauc_ndcg_at_100_std
value: 14.506479655117138
- type: nauc_ndcg_at_10_diff1
value: 71.85759847832983
- type: nauc_ndcg_at_10_max
value: 64.150996056865
- type: nauc_ndcg_at_10_std
value: 14.580606901634278
- type: nauc_ndcg_at_1_diff1
value: 74.71208073612343
- type: nauc_ndcg_at_1_max
value: 59.50430394775893
- type: nauc_ndcg_at_1_std
value: 8.129514198080512
- type: nauc_ndcg_at_20_diff1
value: 71.80987178228351
- type: nauc_ndcg_at_20_max
value: 63.56269460865743
- type: nauc_ndcg_at_20_std
value: 15.024978004625922
- type: nauc_ndcg_at_3_diff1
value: 72.35095651602592
- type: nauc_ndcg_at_3_max
value: 61.60548011855679
- type: nauc_ndcg_at_3_std
value: 12.048248788835263
- type: nauc_ndcg_at_5_diff1
value: 71.48615621881864
- type: nauc_ndcg_at_5_max
value: 61.72870035979784
- type: nauc_ndcg_at_5_std
value: 12.83048357446691
- type: nauc_precision_at_1000_diff1
value: -14.743011420972
- type: nauc_precision_at_1000_max
value: 19.281995763080158
- type: nauc_precision_at_1000_std
value: 49.6140660398164
- type: nauc_precision_at_100_diff1
value: 0.11278174806205563
- type: nauc_precision_at_100_max
value: 29.704511820077332
- type: nauc_precision_at_100_std
value: 47.84916954122579
- type: nauc_precision_at_10_diff1
value: 20.498227967235728
- type: nauc_precision_at_10_max
value: 47.883119365891595
- type: nauc_precision_at_10_std
value: 45.182178693450595
- type: nauc_precision_at_1_diff1
value: 74.71208073612343
- type: nauc_precision_at_1_max
value: 59.50430394775893
- type: nauc_precision_at_1_std
value: 8.129514198080512
- type: nauc_precision_at_20_diff1
value: 12.551737222341455
- type: nauc_precision_at_20_max
value: 40.618899501225634
- type: nauc_precision_at_20_std
value: 48.5598454249067
- type: nauc_precision_at_3_diff1
value: 47.67720764601145
- type: nauc_precision_at_3_max
value: 56.50632017305064
- type: nauc_precision_at_3_std
value: 31.14175140162157
- type: nauc_precision_at_5_diff1
value: 35.10058622792819
- type: nauc_precision_at_5_max
value: 51.88948872657981
- type: nauc_precision_at_5_std
value: 37.62796957461928
- type: nauc_recall_at_1000_diff1
value: 79.57516339869238
- type: nauc_recall_at_1000_max
value: 86.11111111111035
- type: nauc_recall_at_1000_std
value: 79.57516339869238
- type: nauc_recall_at_100_diff1
value: 70.50859559510081
- type: nauc_recall_at_100_max
value: 79.17009941231396
- type: nauc_recall_at_100_std
value: 44.32910419069595
- type: nauc_recall_at_10_diff1
value: 66.16118569361245
- type: nauc_recall_at_10_max
value: 74.73542948302286
- type: nauc_recall_at_10_std
value: 27.680330939810037
- type: nauc_recall_at_1_diff1
value: 74.2356464256346
- type: nauc_recall_at_1_max
value: 54.06962758957632
- type: nauc_recall_at_1_std
value: 0.8037891907963532
- type: nauc_recall_at_20_diff1
value: 65.4748436545527
- type: nauc_recall_at_20_max
value: 73.81532199081235
- type: nauc_recall_at_20_std
value: 33.59324708196253
- type: nauc_recall_at_3_diff1
value: 68.83194804473622
- type: nauc_recall_at_3_max
value: 61.77722610439669
- type: nauc_recall_at_3_std
value: 13.984923756556714
- type: nauc_recall_at_5_diff1
value: 65.51467417209523
- type: nauc_recall_at_5_max
value: 64.08276291427661
- type: nauc_recall_at_5_std
value: 19.976472037847167
- type: ndcg_at_1
value: 60.0
- type: ndcg_at_10
value: 71.586
- type: ndcg_at_100
value: 73.76899999999999
- type: ndcg_at_1000
value: 74.386
- type: ndcg_at_20
value: 72.612
- type: ndcg_at_3
value: 66.944
- type: ndcg_at_5
value: 69.333
- type: precision_at_1
value: 60.0
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.073
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 5.033
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 17.4
- type: recall_at_1
value: 56.760999999999996
- type: recall_at_10
value: 84.589
- type: recall_at_100
value: 94.333
- type: recall_at_1000
value: 99.333
- type: recall_at_20
value: 88.43299999999999
- type: recall_at_3
value: 72.10600000000001
- type: recall_at_5
value: 78.194
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 84.60600000000001
- type: map_at_1
value: 0.257
- type: map_at_10
value: 2.196
- type: map_at_100
value: 13.252
- type: map_at_1000
value: 31.473000000000003
- type: map_at_20
value: 4.023000000000001
- type: map_at_3
value: 0.722
- type: map_at_5
value: 1.146
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 97.0
- type: mrr_at_100
value: 97.0
- type: mrr_at_1000
value: 97.0
- type: mrr_at_20
value: 97.0
- type: mrr_at_3
value: 97.0
- type: mrr_at_5
value: 97.0
- type: nauc_map_at_1000_diff1
value: -30.674816554207062
- type: nauc_map_at_1000_max
value: 53.18598689657068
- type: nauc_map_at_1000_std
value: 78.88325309469121
- type: nauc_map_at_100_diff1
value: -17.6877824653978
- type: nauc_map_at_100_max
value: 19.584159765315658
- type: nauc_map_at_100_std
value: 48.051154190992726
- type: nauc_map_at_10_diff1
value: 20.076631089898626
- type: nauc_map_at_10_max
value: -8.642556160185636
- type: nauc_map_at_10_std
value: -5.768698617334298
- type: nauc_map_at_1_diff1
value: 27.342260509653798
- type: nauc_map_at_1_max
value: -23.400451210297994
- type: nauc_map_at_1_std
value: -21.152006353733853
- type: nauc_map_at_20_diff1
value: 8.019321726240506
- type: nauc_map_at_20_max
value: -1.4826378210544222
- type: nauc_map_at_20_std
value: 5.698208117745366
- type: nauc_map_at_3_diff1
value: 32.073377946749446
- type: nauc_map_at_3_max
value: -13.099353983204654
- type: nauc_map_at_3_std
value: -15.36319127398037
- type: nauc_map_at_5_diff1
value: 22.500045815797876
- type: nauc_map_at_5_max
value: -8.548135411428023
- type: nauc_map_at_5_std
value: -8.547850460331334
- type: nauc_mrr_at_1000_diff1
value: -6.022408963585526
- type: nauc_mrr_at_1000_max
value: 4.481792717087155
- type: nauc_mrr_at_1000_std
value: 51.6962340491753
- type: nauc_mrr_at_100_diff1
value: -6.022408963585526
- type: nauc_mrr_at_100_max
value: 4.481792717087155
- type: nauc_mrr_at_100_std
value: 51.6962340491753
- type: nauc_mrr_at_10_diff1
value: -6.022408963585526
- type: nauc_mrr_at_10_max
value: 4.481792717087155
- type: nauc_mrr_at_10_std
value: 51.6962340491753
- type: nauc_mrr_at_1_diff1
value: -6.022408963585076
- type: nauc_mrr_at_1_max
value: 4.481792717087146
- type: nauc_mrr_at_1_std
value: 51.69623404917518
- type: nauc_mrr_at_20_diff1
value: -6.022408963585526
- type: nauc_mrr_at_20_max
value: 4.481792717087155
- type: nauc_mrr_at_20_std
value: 51.6962340491753
- type: nauc_mrr_at_3_diff1
value: -6.022408963585526
- type: nauc_mrr_at_3_max
value: 4.481792717087155
- type: nauc_mrr_at_3_std
value: 51.6962340491753
- type: nauc_mrr_at_5_diff1
value: -6.022408963585526
- type: nauc_mrr_at_5_max
value: 4.481792717087155
- type: nauc_mrr_at_5_std
value: 51.6962340491753
- type: nauc_ndcg_at_1000_diff1
value: -20.79697283984295
- type: nauc_ndcg_at_1000_max
value: 52.97671908009218
- type: nauc_ndcg_at_1000_std
value: 75.43907707019758
- type: nauc_ndcg_at_100_diff1
value: -38.620752706946455
- type: nauc_ndcg_at_100_max
value: 49.41307462381511
- type: nauc_ndcg_at_100_std
value: 81.33299379244252
- type: nauc_ndcg_at_10_diff1
value: -18.611906363037356
- type: nauc_ndcg_at_10_max
value: 44.20544651664479
- type: nauc_ndcg_at_10_std
value: 61.322552829935816
- type: nauc_ndcg_at_1_diff1
value: 18.625935567849073
- type: nauc_ndcg_at_1_max
value: -10.104132769280879
- type: nauc_ndcg_at_1_std
value: 22.449560689879743
- type: nauc_ndcg_at_20_diff1
value: -30.61130208138771
- type: nauc_ndcg_at_20_max
value: 52.68851710375231
- type: nauc_ndcg_at_20_std
value: 69.72357683382992
- type: nauc_ndcg_at_3_diff1
value: 5.695394821691213
- type: nauc_ndcg_at_3_max
value: 37.909122367102135
- type: nauc_ndcg_at_3_std
value: 46.2366603255159
- type: nauc_ndcg_at_5_diff1
value: -15.273067832464731
- type: nauc_ndcg_at_5_max
value: 49.7054639475091
- type: nauc_ndcg_at_5_std
value: 58.83754007826166
- type: nauc_precision_at_1000_diff1
value: -31.565302588492035
- type: nauc_precision_at_1000_max
value: 52.56214379514724
- type: nauc_precision_at_1000_std
value: 53.40618234326055
- type: nauc_precision_at_100_diff1
value: -44.67273120709088
- type: nauc_precision_at_100_max
value: 48.30381155522576
- type: nauc_precision_at_100_std
value: 82.1984661602578
- type: nauc_precision_at_10_diff1
value: -24.737383556860145
- type: nauc_precision_at_10_max
value: 52.816815002878556
- type: nauc_precision_at_10_std
value: 67.99052410030845
- type: nauc_precision_at_1_diff1
value: -6.022408963585076
- type: nauc_precision_at_1_max
value: 4.481792717087146
- type: nauc_precision_at_1_std
value: 51.69623404917518
- type: nauc_precision_at_20_diff1
value: -40.23628054967093
- type: nauc_precision_at_20_max
value: 56.980056980057014
- type: nauc_precision_at_20_std
value: 76.60976777785895
- type: nauc_precision_at_3_diff1
value: -4.661784068466279
- type: nauc_precision_at_3_max
value: 59.052007899934125
- type: nauc_precision_at_3_std
value: 58.187952600394986
- type: nauc_precision_at_5_diff1
value: -38.11848143512736
- type: nauc_precision_at_5_max
value: 68.6149353358365
- type: nauc_precision_at_5_std
value: 73.55652899457661
- type: nauc_recall_at_1000_diff1
value: -14.886527444436345
- type: nauc_recall_at_1000_max
value: 48.07492302795808
- type: nauc_recall_at_1000_std
value: 65.05623212485906
- type: nauc_recall_at_100_diff1
value: -8.148385729388195
- type: nauc_recall_at_100_max
value: 8.041615364614533
- type: nauc_recall_at_100_std
value: 33.77187914574611
- type: nauc_recall_at_10_diff1
value: 24.333628413035942
- type: nauc_recall_at_10_max
value: -14.577877145192078
- type: nauc_recall_at_10_std
value: -12.131819145098557
- type: nauc_recall_at_1_diff1
value: 27.342260509653798
- type: nauc_recall_at_1_max
value: -23.400451210297994
- type: nauc_recall_at_1_std
value: -21.152006353733853
- type: nauc_recall_at_20_diff1
value: 13.695556376785564
- type: nauc_recall_at_20_max
value: -8.872009346408264
- type: nauc_recall_at_20_std
value: -3.163199444247112
- type: nauc_recall_at_3_diff1
value: 32.00442538217753
- type: nauc_recall_at_3_max
value: -15.159737942664552
- type: nauc_recall_at_3_std
value: -17.530833132440645
- type: nauc_recall_at_5_diff1
value: 22.64740552912405
- type: nauc_recall_at_5_max
value: -12.947090597010414
- type: nauc_recall_at_5_std
value: -12.914478822476807
- type: ndcg_at_1
value: 88.0
- type: ndcg_at_10
value: 84.60600000000001
- type: ndcg_at_100
value: 64.31700000000001
- type: ndcg_at_1000
value: 56.40500000000001
- type: ndcg_at_20
value: 80.561
- type: ndcg_at_3
value: 87.87700000000001
- type: ndcg_at_5
value: 86.641
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 88.2
- type: precision_at_100
value: 65.9
- type: precision_at_1000
value: 25.019999999999996
- type: precision_at_20
value: 84.7
- type: precision_at_3
value: 92.0
- type: precision_at_5
value: 90.0
- type: recall_at_1
value: 0.257
- type: recall_at_10
value: 2.338
- type: recall_at_100
value: 15.831999999999999
- type: recall_at_1000
value: 52.519000000000005
- type: recall_at_20
value: 4.367
- type: recall_at_3
value: 0.74
- type: recall_at_5
value: 1.196
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 31.426
- type: map_at_1
value: 3.4709999999999996
- type: map_at_10
value: 13.236999999999998
- type: map_at_100
value: 19.521
- type: map_at_1000
value: 21.224
- type: map_at_20
value: 15.626000000000001
- type: map_at_3
value: 7.152
- type: map_at_5
value: 9.914000000000001
- type: mrr_at_1
value: 44.89795918367347
- type: mrr_at_10
value: 57.54373177842565
- type: mrr_at_100
value: 57.855267710139536
- type: mrr_at_1000
value: 57.855267710139536
- type: mrr_at_20
value: 57.70071764969724
- type: mrr_at_3
value: 52.72108843537414
- type: mrr_at_5
value: 55.06802721088435
- type: nauc_map_at_1000_diff1
value: 21.148857552115558
- type: nauc_map_at_1000_max
value: 2.0837572569021323
- type: nauc_map_at_1000_std
value: 3.203419709665347
- type: nauc_map_at_100_diff1
value: 21.383778167597878
- type: nauc_map_at_100_max
value: 0.965767943155967
- type: nauc_map_at_100_std
value: 0.3949924961020957
- type: nauc_map_at_10_diff1
value: 27.178555638086394
- type: nauc_map_at_10_max
value: 4.480675175857958
- type: nauc_map_at_10_std
value: -13.69553539513878
- type: nauc_map_at_1_diff1
value: 27.63901823865334
- type: nauc_map_at_1_max
value: -18.6387233237763
- type: nauc_map_at_1_std
value: -27.02164241863646
- type: nauc_map_at_20_diff1
value: 23.892104752374888
- type: nauc_map_at_20_max
value: 3.5343136621362348
- type: nauc_map_at_20_std
value: -8.765101188860816
- type: nauc_map_at_3_diff1
value: 22.065793929837493
- type: nauc_map_at_3_max
value: 0.8063396680860568
- type: nauc_map_at_3_std
value: -20.404849396621824
- type: nauc_map_at_5_diff1
value: 22.66626080580714
- type: nauc_map_at_5_max
value: 5.423340658352383
- type: nauc_map_at_5_std
value: -18.31523779843455
- type: nauc_mrr_at_1000_diff1
value: 30.520722269282665
- type: nauc_mrr_at_1000_max
value: -16.644959497742267
- type: nauc_mrr_at_1000_std
value: -16.3824126273053
- type: nauc_mrr_at_100_diff1
value: 30.520722269282665
- type: nauc_mrr_at_100_max
value: -16.644959497742267
- type: nauc_mrr_at_100_std
value: -16.3824126273053
- type: nauc_mrr_at_10_diff1
value: 30.428248939332974
- type: nauc_mrr_at_10_max
value: -16.300183919261585
- type: nauc_mrr_at_10_std
value: -15.404823235836309
- type: nauc_mrr_at_1_diff1
value: 27.041346572613474
- type: nauc_mrr_at_1_max
value: -23.181309312755804
- type: nauc_mrr_at_1_std
value: -24.33076726484014
- type: nauc_mrr_at_20_diff1
value: 30.676558567379303
- type: nauc_mrr_at_20_max
value: -16.914268763031416
- type: nauc_mrr_at_20_std
value: -15.77742854976336
- type: nauc_mrr_at_3_diff1
value: 31.718457109787096
- type: nauc_mrr_at_3_max
value: -15.508391132202235
- type: nauc_mrr_at_3_std
value: -20.33229438349494
- type: nauc_mrr_at_5_diff1
value: 28.73798376227693
- type: nauc_mrr_at_5_max
value: -16.086295031060196
- type: nauc_mrr_at_5_std
value: -15.644604635769321
- type: nauc_ndcg_at_1000_diff1
value: 22.158724660189606
- type: nauc_ndcg_at_1000_max
value: -3.1755686809941475
- type: nauc_ndcg_at_1000_std
value: 19.258386224159075
- type: nauc_ndcg_at_100_diff1
value: 21.83846748649288
- type: nauc_ndcg_at_100_max
value: -10.939957598756036
- type: nauc_ndcg_at_100_std
value: 14.729678880436623
- type: nauc_ndcg_at_10_diff1
value: 26.944882726098424
- type: nauc_ndcg_at_10_max
value: -3.5176483833346617
- type: nauc_ndcg_at_10_std
value: -5.400606773697211
- type: nauc_ndcg_at_1_diff1
value: 26.649410985172985
- type: nauc_ndcg_at_1_max
value: -18.806716526067493
- type: nauc_ndcg_at_1_std
value: -25.100244999343506
- type: nauc_ndcg_at_20_diff1
value: 24.860266153648315
- type: nauc_ndcg_at_20_max
value: -7.521401821712892
- type: nauc_ndcg_at_20_std
value: -3.3696577425983003
- type: nauc_ndcg_at_3_diff1
value: 23.9933326962406
- type: nauc_ndcg_at_3_max
value: -0.4609479344284664
- type: nauc_ndcg_at_3_std
value: -15.176459166869897
- type: nauc_ndcg_at_5_diff1
value: 22.50595978713142
- type: nauc_ndcg_at_5_max
value: -2.1093870656000857
- type: nauc_ndcg_at_5_std
value: -12.732197425528257
- type: nauc_precision_at_1000_diff1
value: -20.335120385950024
- type: nauc_precision_at_1000_max
value: 26.95109729939765
- type: nauc_precision_at_1000_std
value: 29.981685890622117
- type: nauc_precision_at_100_diff1
value: -2.782114329320704
- type: nauc_precision_at_100_max
value: 2.9489322002048604
- type: nauc_precision_at_100_std
value: 67.3074073674319
- type: nauc_precision_at_10_diff1
value: 21.385177180383383
- type: nauc_precision_at_10_max
value: -2.4696365259422817
- type: nauc_precision_at_10_std
value: 14.469784299536673
- type: nauc_precision_at_1_diff1
value: 27.041346572613474
- type: nauc_precision_at_1_max
value: -23.181309312755804
- type: nauc_precision_at_1_std
value: -24.33076726484014
- type: nauc_precision_at_20_diff1
value: 11.993846579997673
- type: nauc_precision_at_20_max
value: -2.4792189693296227
- type: nauc_precision_at_20_std
value: 28.581394687807745
- type: nauc_precision_at_3_diff1
value: 20.70568446328836
- type: nauc_precision_at_3_max
value: 0.37326398699875984
- type: nauc_precision_at_3_std
value: -12.983918676694389
- type: nauc_precision_at_5_diff1
value: 19.47466335828124
- type: nauc_precision_at_5_max
value: -1.8921617684385994
- type: nauc_precision_at_5_std
value: -6.533875294402164
- type: nauc_recall_at_1000_diff1
value: 7.611201305723156
- type: nauc_recall_at_1000_max
value: 5.6416194035820055
- type: nauc_recall_at_1000_std
value: 61.695208644278
- type: nauc_recall_at_100_diff1
value: 10.0183258158735
- type: nauc_recall_at_100_max
value: -10.950612455698973
- type: nauc_recall_at_100_std
value: 33.06069987640471
- type: nauc_recall_at_10_diff1
value: 24.738210305731535
- type: nauc_recall_at_10_max
value: -2.6592454032071546
- type: nauc_recall_at_10_std
value: -4.83987517793115
- type: nauc_recall_at_1_diff1
value: 27.63901823865334
- type: nauc_recall_at_1_max
value: -18.6387233237763
- type: nauc_recall_at_1_std
value: -27.02164241863646
- type: nauc_recall_at_20_diff1
value: 17.79601177409034
- type: nauc_recall_at_20_max
value: -6.681637093148051
- type: nauc_recall_at_20_std
value: 3.369193919932238
- type: nauc_recall_at_3_diff1
value: 24.9589431081204
- type: nauc_recall_at_3_max
value: 2.4783640980500232
- type: nauc_recall_at_3_std
value: -19.567415651090702
- type: nauc_recall_at_5_diff1
value: 23.71803410135437
- type: nauc_recall_at_5_max
value: 1.6294309357641652
- type: nauc_recall_at_5_std
value: -15.365511906408983
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 31.426
- type: ndcg_at_100
value: 41.558
- type: ndcg_at_1000
value: 53.042
- type: ndcg_at_20
value: 31.108999999999998
- type: ndcg_at_3
value: 35.518
- type: ndcg_at_5
value: 33.235
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 27.551
- type: precision_at_100
value: 8.204
- type: precision_at_1000
value: 1.582
- type: precision_at_20
value: 19.796
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.4709999999999996
- type: recall_at_10
value: 19.563
- type: recall_at_100
value: 50.3
- type: recall_at_1000
value: 85.13199999999999
- type: recall_at_20
value: 26.738
- type: recall_at_3
value: 7.8420000000000005
- type: recall_at_5
value: 11.994
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.29850746268657
- type: ap
value: 30.109785890841966
- type: ap_weighted
value: 30.109785890841966
- type: f1
value: 61.76875915202924
- type: f1_weighted
value: 71.32073190458556
- type: main_score
value: 68.29850746268657
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.3068
- type: ap
value: 86.17914339624038
- type: ap_weighted
value: 86.17914339624038
- type: f1
value: 90.29716826358077
- type: f1_weighted
value: 90.29716826358077
- type: main_score
value: 90.3068
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 46.272000000000006
- type: f1
value: 45.57042543386915
- type: f1_weighted
value: 45.57042543386915
- type: main_score
value: 46.272000000000006
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 44.9469238081379
- type: v_measure
value: 44.9469238081379
- type: v_measure_std
value: 13.26811262671461
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 34.12071448053325
- type: v_measure
value: 34.12071448053325
- type: v_measure_std
value: 13.7019879046405
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 61.597667288657846
- type: map
value: 61.597667288657846
- type: mrr
value: 75.57940904893813
- type: nAUC_map_diff1
value: 8.745172077340095
- type: nAUC_map_max
value: 20.114863024035493
- type: nAUC_map_std
value: 15.991351189572192
- type: nAUC_mrr_diff1
value: 20.781369244159983
- type: nAUC_mrr_max
value: 30.78542570228559
- type: nAUC_mrr_std
value: 19.861484857303676
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 88.55587996301419
- type: cosine_spearman
value: 86.40317357420093
- type: euclidean_pearson
value: 86.93771958250231
- type: euclidean_spearman
value: 86.40317357420093
- type: main_score
value: 86.40317357420093
- type: manhattan_pearson
value: 86.92196577117366
- type: manhattan_spearman
value: 85.79834051556095
- type: pearson
value: 88.55587996301419
- type: spearman
value: 86.40317357420093
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.0064935064935
- type: f1
value: 79.29524254086299
- type: f1_weighted
value: 79.295242540863
- type: main_score
value: 80.0064935064935
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 35.27186813341181
- type: v_measure
value: 35.27186813341181
- type: v_measure_std
value: 0.8621482145872432
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 28.411805064852295
- type: v_measure
value: 28.411805064852295
- type: v_measure_std
value: 0.7194290078011281
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 43.675
- type: f1
value: 40.15061931375577
- type: f1_weighted
value: 45.714186572727066
- type: main_score
value: 43.675
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 84.35640000000001
- type: ap
value: 79.07507736685174
- type: ap_weighted
value: 79.07507736685174
- type: f1
value: 84.32288494833531
- type: f1_weighted
value: 84.32288494833531
- type: main_score
value: 84.35640000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.35658914728684
- type: f1
value: 90.86877537911086
- type: f1_weighted
value: 91.3282092774443
- type: main_score
value: 91.35658914728684
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 60.63611491108071
- type: f1
value: 42.78886482112741
- type: f1_weighted
value: 63.44208631840539
- type: main_score
value: 60.63611491108071
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 66.68796234028245
- type: f1
value: 64.44940791000278
- type: f1_weighted
value: 65.77554417406792
- type: main_score
value: 66.68796234028245
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 73.0598520511096
- type: f1
value: 72.14267273884774
- type: f1_weighted
value: 72.93345180137516
- type: main_score
value: 73.0598520511096
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 31.143081341699606
- type: v_measure
value: 31.143081341699606
- type: v_measure_std
value: 1.5578716347076906
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 27.010818869829556
- type: v_measure
value: 27.010818869829556
- type: v_measure_std
value: 1.1771554540819378
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 30.20503776754942
- type: map
value: 30.20503776754942
- type: mrr
value: 31.076636002733437
- type: nAUC_map_diff1
value: 7.290568655287842
- type: nAUC_map_max
value: -21.381599355932945
- type: nAUC_map_std
value: -7.709920607543168
- type: nAUC_mrr_diff1
value: 7.558397329284913
- type: nAUC_mrr_max
value: -15.981397186427607
- type: nAUC_mrr_std
value: -4.870495243168834
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 51.85893476633338
- type: v_measure
value: 51.85893476633338
- type: v_measure_std
value: 4.704770139385852
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 61.8124222918822
- type: v_measure
value: 61.8124222918822
- type: v_measure_std
value: 11.994472578100165
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 77.63310776935984
- type: cosine_spearman
value: 69.86468291111039
- type: euclidean_pearson
value: 73.91537077798837
- type: euclidean_spearman
value: 69.86468376650203
- type: main_score
value: 69.86468291111039
- type: manhattan_pearson
value: 73.68616048370464
- type: manhattan_spearman
value: 69.76232036206659
- type: pearson
value: 77.63310776935984
- type: spearman
value: 69.86468291111039
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 57.71716838245049
- type: cosine_spearman
value: 61.797855543446424
- type: euclidean_pearson
value: 58.22958675325848
- type: euclidean_spearman
value: 61.797855543446424
- type: main_score
value: 61.797855543446424
- type: manhattan_pearson
value: 57.63117544997929
- type: manhattan_spearman
value: 61.3629404350085
- type: pearson
value: 57.71716838245049
- type: spearman
value: 61.797855543446424
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 82.30260026790903
- type: cosine_spearman
value: 82.66959813070869
- type: euclidean_pearson
value: 82.08383017580783
- type: euclidean_spearman
value: 82.66959813070869
- type: main_score
value: 82.66959813070869
- type: manhattan_pearson
value: 81.77991451392153
- type: manhattan_spearman
value: 82.3652534745606
- type: pearson
value: 82.30260026790903
- type: spearman
value: 82.66959813070869
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 71.50608384084478
- type: cosine_spearman
value: 68.94968064977785
- type: euclidean_pearson
value: 70.73381299949564
- type: euclidean_spearman
value: 68.94968064977785
- type: main_score
value: 68.94968064977785
- type: manhattan_pearson
value: 70.5385486953787
- type: manhattan_spearman
value: 68.82132770672365
- type: pearson
value: 71.50608384084478
- type: spearman
value: 68.94968064977785
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 73.66969825874907
- type: cosine_spearman
value: 75.55374982088381
- type: euclidean_pearson
value: 75.9339313749594
- type: euclidean_spearman
value: 75.55374982088381
- type: main_score
value: 75.55374982088381
- type: manhattan_pearson
value: 75.88287553383817
- type: manhattan_spearman
value: 75.50729812977688
- type: pearson
value: 73.66969825874907
- type: spearman
value: 75.55374982088381
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 74.5954724414016
- type: cosine_spearman
value: 77.2688820850505
- type: euclidean_pearson
value: 77.19866353971555
- type: euclidean_spearman
value: 77.2688820850505
- type: main_score
value: 77.2688820850505
- type: manhattan_pearson
value: 77.27072603680978
- type: manhattan_spearman
value: 77.29408453673607
- type: pearson
value: 74.5954724414016
- type: spearman
value: 77.2688820850505
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 71.52588722654055
- type: cosine_spearman
value: 74.97235736456061
- type: euclidean_pearson
value: 74.51952528854038
- type: euclidean_spearman
value: 74.97235736456061
- type: main_score
value: 74.97235736456061
- type: manhattan_pearson
value: 74.48272300884209
- type: manhattan_spearman
value: 74.80633649415176
- type: pearson
value: 71.52588722654055
- type: spearman
value: 74.97235736456061
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 68.80031120401976
- type: cosine_spearman
value: 69.07945196478491
- type: euclidean_pearson
value: 68.99674496430792
- type: euclidean_spearman
value: 69.07945196478491
- type: main_score
value: 69.07945196478491
- type: manhattan_pearson
value: 69.00236107775687
- type: manhattan_spearman
value: 68.98064879049272
- type: pearson
value: 68.80031120401976
- type: spearman
value: 69.07945196478491
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 65.6898007230089
- type: cosine_spearman
value: 69.72386211803668
- type: euclidean_pearson
value: 69.04523003701475
- type: euclidean_spearman
value: 69.72386211803668
- type: main_score
value: 69.72386211803668
- type: manhattan_pearson
value: 68.80479743770702
- type: manhattan_spearman
value: 69.43264575177459
- type: pearson
value: 65.6898007230089
- type: spearman
value: 69.72386211803668
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 79.74088066874383
- type: map
value: 79.74088066874383
- type: mrr
value: 94.47697455050397
- type: nAUC_map_diff1
value: 8.036086256905502
- type: nAUC_map_max
value: 54.88199803816819
- type: nAUC_map_std
value: 69.16267942176574
- type: nAUC_mrr_diff1
value: 50.020738477678115
- type: nAUC_mrr_max
value: 83.28922770326483
- type: nAUC_mrr_std
value: 83.63973501802224
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.83861386138614
- type: cosine_accuracy_threshold
value: 74.75666999816895
- type: cosine_ap
value: 96.15132792066652
- type: cosine_f1
value: 91.84890656063618
- type: cosine_f1_threshold
value: 71.70594930648804
- type: cosine_precision
value: 91.30434782608695
- type: cosine_recall
value: 92.4
- type: dot_accuracy
value: 99.83861386138614
- type: dot_accuracy_threshold
value: 74.75666999816895
- type: dot_ap
value: 96.15132792066653
- type: dot_f1
value: 91.84890656063618
- type: dot_f1_threshold
value: 71.70596122741699
- type: dot_precision
value: 91.30434782608695
- type: dot_recall
value: 92.4
- type: euclidean_accuracy
value: 99.83861386138614
- type: euclidean_accuracy_threshold
value: 71.05395793914795
- type: euclidean_ap
value: 96.15132792066652
- type: euclidean_f1
value: 91.84890656063618
- type: euclidean_f1_threshold
value: 75.22505521774292
- type: euclidean_precision
value: 91.30434782608695
- type: euclidean_recall
value: 92.4
- type: main_score
value: 96.15132792066653
- type: manhattan_accuracy
value: 99.83564356435643
- type: manhattan_accuracy_threshold
value: 1547.6950645446777
- type: manhattan_ap
value: 96.06151211452136
- type: manhattan_f1
value: 91.61676646706587
- type: manhattan_f1_threshold
value: 1626.3608932495117
- type: manhattan_precision
value: 91.43426294820716
- type: manhattan_recall
value: 91.8
- type: max_ap
value: 96.15132792066653
- type: max_f1
value: 91.84890656063618
- type: max_precision
value: 91.43426294820716
- type: max_recall
value: 92.4
- type: similarity_accuracy
value: 99.83861386138614
- type: similarity_accuracy_threshold
value: 74.75666999816895
- type: similarity_ap
value: 96.15132792066652
- type: similarity_f1
value: 91.84890656063618
- type: similarity_f1_threshold
value: 71.70594930648804
- type: similarity_precision
value: 91.30434782608695
- type: similarity_recall
value: 92.4
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 61.24120328328453
- type: v_measure
value: 61.24120328328453
- type: v_measure_std
value: 3.9946560691100372
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 33.808268374864745
- type: v_measure
value: 33.808268374864745
- type: v_measure_std
value: 1.2212188701887239
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 52.19806018468037
- type: map
value: 52.19806018468037
- type: mrr
value: 52.98921462524404
- type: nAUC_map_diff1
value: 37.41443156995912
- type: nAUC_map_max
value: 9.410262727675603
- type: nAUC_map_std
value: 8.7094185014992
- type: nAUC_mrr_diff1
value: 37.78202772392581
- type: nAUC_mrr_max
value: 10.517635536565816
- type: nAUC_mrr_std
value: 8.509423813772491
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.48413700430812
- type: cosine_spearman
value: 30.357162200875816
- type: dot_pearson
value: 30.484140144824938
- type: dot_spearman
value: 30.357162200875816
- type: main_score
value: 30.357162200875816
- type: pearson
value: 30.48413700430812
- type: spearman
value: 30.357162200875816
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 66.8359375
- type: ap
value: 12.482653786025985
- type: ap_weighted
value: 12.482653786025985
- type: f1
value: 51.328608527332385
- type: f1_weighted
value: 74.07974463955398
- type: main_score
value: 66.8359375
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 53.907753254103
- type: f1
value: 54.22707647269581
- type: f1_weighted
value: 53.611822984407695
- type: main_score
value: 53.907753254103
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 38.1364789307295
- type: v_measure
value: 38.1364789307295
- type: v_measure_std
value: 2.0731634966352077
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 82.66674614054956
- type: cosine_accuracy_threshold
value: 79.80123162269592
- type: cosine_ap
value: 63.28209719072804
- type: cosine_f1
value: 60.16389710903711
- type: cosine_f1_threshold
value: 72.22893834114075
- type: cosine_precision
value: 52.90232185748599
- type: cosine_recall
value: 69.73614775725594
- type: dot_accuracy
value: 82.66674614054956
- type: dot_accuracy_threshold
value: 79.8012375831604
- type: dot_ap
value: 63.282103870645166
- type: dot_f1
value: 60.16389710903711
- type: dot_f1_threshold
value: 72.22894430160522
- type: dot_precision
value: 52.90232185748599
- type: dot_recall
value: 69.73614775725594
- type: euclidean_accuracy
value: 82.66674614054956
- type: euclidean_accuracy_threshold
value: 63.55905532836914
- type: euclidean_ap
value: 63.282095399953164
- type: euclidean_f1
value: 60.16389710903711
- type: euclidean_f1_threshold
value: 74.5265781879425
- type: euclidean_precision
value: 52.90232185748599
- type: euclidean_recall
value: 69.73614775725594
- type: main_score
value: 63.282103870645166
- type: manhattan_accuracy
value: 82.74423317637242
- type: manhattan_accuracy_threshold
value: 1415.380859375
- type: manhattan_ap
value: 63.26931757839598
- type: manhattan_f1
value: 60.11014948859166
- type: manhattan_f1_threshold
value: 1632.522201538086
- type: manhattan_precision
value: 52.359506559624045
- type: manhattan_recall
value: 70.55408970976254
- type: max_ap
value: 63.282103870645166
- type: max_f1
value: 60.16389710903711
- type: max_precision
value: 52.90232185748599
- type: max_recall
value: 70.55408970976254
- type: similarity_accuracy
value: 82.66674614054956
- type: similarity_accuracy_threshold
value: 79.80123162269592
- type: similarity_ap
value: 63.28209719072804
- type: similarity_f1
value: 60.16389710903711
- type: similarity_f1_threshold
value: 72.22893834114075
- type: similarity_precision
value: 52.90232185748599
- type: similarity_recall
value: 69.73614775725594
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 88.10105949470253
- type: cosine_accuracy_threshold
value: 68.95147562026978
- type: cosine_ap
value: 84.65516103854583
- type: cosine_f1
value: 76.54581123301605
- type: cosine_f1_threshold
value: 63.92929553985596
- type: cosine_precision
value: 72.46526344751685
- type: cosine_recall
value: 81.11333538651063
- type: dot_accuracy
value: 88.10105949470253
- type: dot_accuracy_threshold
value: 68.95147562026978
- type: dot_ap
value: 84.65516301437592
- type: dot_f1
value: 76.54581123301605
- type: dot_f1_threshold
value: 63.92928957939148
- type: dot_precision
value: 72.46526344751685
- type: dot_recall
value: 81.11333538651063
- type: euclidean_accuracy
value: 88.10105949470253
- type: euclidean_accuracy_threshold
value: 78.80169153213501
- type: euclidean_ap
value: 84.65517268264233
- type: euclidean_f1
value: 76.54581123301605
- type: euclidean_f1_threshold
value: 84.93610620498657
- type: euclidean_precision
value: 72.46526344751685
- type: euclidean_recall
value: 81.11333538651063
- type: main_score
value: 84.65517268264233
- type: manhattan_accuracy
value: 88.08941669577366
- type: manhattan_accuracy_threshold
value: 1739.3169403076172
- type: manhattan_ap
value: 84.64592398855694
- type: manhattan_f1
value: 76.62890540443034
- type: manhattan_f1_threshold
value: 1861.344337463379
- type: manhattan_precision
value: 72.09775967413442
- type: manhattan_recall
value: 81.76778564829073
- type: max_ap
value: 84.65517268264233
- type: max_f1
value: 76.62890540443034
- type: max_precision
value: 72.46526344751685
- type: max_recall
value: 81.76778564829073
- type: similarity_accuracy
value: 88.10105949470253
- type: similarity_accuracy_threshold
value: 68.95147562026978
- type: similarity_ap
value: 84.65516103854583
- type: similarity_f1
value: 76.54581123301605
- type: similarity_f1_threshold
value: 63.92929553985596
- type: similarity_precision
value: 72.46526344751685
- type: similarity_recall
value: 81.11333538651063
---
# lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF
This model was converted to GGUF format from [`Snowflake/snowflake-arctic-embed-m-v1.5`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning of life and the universe is"
```
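Because this is an embedding model rather than a text-generation model, the `-p` completion prompt above may not yield meaningful output. The `llama-embedding` tool that ships with llama.cpp is likely the better entry point; a minimal sketch, assuming your build includes `llama-embedding` and that it accepts the same `--hf-repo`/`--hf-file` flags as the other binaries:
```bash
# Compute an embedding vector for a single sentence.
# Flag support may vary across llama.cpp versions.
llama-embedding --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF \
  --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf \
  -p "what is snow made of?"
```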
### Server:
```bash
llama-server --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048
```
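To serve embeddings over HTTP, the server likely needs embeddings mode enabled. The sketch below assumes the `--embedding` flag and the OpenAI-compatible `/v1/embeddings` route on the default port 8080; both are assumptions that may differ across llama.cpp versions:
```bash
# Start the server in embeddings mode (assumed flag: --embedding).
llama-server --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF \
  --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf --embedding -c 2048 &

# Query the OpenAI-compatible embeddings endpoint.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "what is snow made of?"}'
```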
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
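Recent llama.cpp revisions have deprecated the Makefile build in favor of CMake, so if the `make` invocation above fails on a current checkout, the rough CMake equivalent is:
```bash
# Configure with CURL support enabled, then build (assumes a recent
# llama.cpp checkout where the Makefile has been retired).
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```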
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning of life and the universe is"
```
or
```bash
./llama-server --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
magicunicorn/mxbai-embed-large-v1-Q8_0-GGUF | magicunicorn | feature-extraction | [
"sentence-transformers",
"gguf",
"mteb",
"transformers.js",
"transformers",
"llama-cpp",
"gguf-my-repo",
"feature-extraction",
"en",
"base_model:mixedbread-ai/mxbai-embed-large-v1",
"base_model:quantized:mixedbread-ai/mxbai-embed-large-v1",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-01-12T20:58:40 | 2025-01-12T23:12:53 | 56 | 0 | ---
base_model: mixedbread-ai/mxbai-embed-large-v1
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- mteb
- transformers.js
- transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: mxbai-angle-large-v1
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.044776119403
- type: ap
value: 37.7362433623053
- type: f1
value: 68.92736573359774
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.84025000000001
- type: ap
value: 90.93190875404055
- type: f1
value: 93.8297833897293
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 49.184
- type: f1
value: 48.74163227751588
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 41.252
- type: map_at_10
value: 57.778
- type: map_at_100
value: 58.233000000000004
- type: map_at_1000
value: 58.23700000000001
- type: map_at_3
value: 53.449999999999996
- type: map_at_5
value: 56.376000000000005
- type: mrr_at_1
value: 41.679
- type: mrr_at_10
value: 57.92699999999999
- type: mrr_at_100
value: 58.389
- type: mrr_at_1000
value: 58.391999999999996
- type: mrr_at_3
value: 53.651
- type: mrr_at_5
value: 56.521
- type: ndcg_at_1
value: 41.252
- type: ndcg_at_10
value: 66.018
- type: ndcg_at_100
value: 67.774
- type: ndcg_at_1000
value: 67.84400000000001
- type: ndcg_at_3
value: 57.372
- type: ndcg_at_5
value: 62.646
- type: precision_at_1
value: 41.252
- type: precision_at_10
value: 9.189
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.902
- type: precision_at_5
value: 16.302
- type: recall_at_1
value: 41.252
- type: recall_at_10
value: 91.892
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 68.706
- type: recall_at_5
value: 81.50800000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.97294504317859
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.98071077674629
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 65.16477858490782
- type: mrr
value: 78.23583080508287
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.6277629421789
- type: cos_sim_spearman
value: 88.4056288400568
- type: euclidean_pearson
value: 87.94871847578163
- type: euclidean_spearman
value: 88.4056288400568
- type: manhattan_pearson
value: 87.73271254229648
- type: manhattan_spearman
value: 87.91826833762677
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.81818181818181
- type: f1
value: 87.79879337316918
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.91773608582761
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.73059477462478
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.745999999999995
- type: map_at_10
value: 43.632
- type: map_at_100
value: 45.206
- type: map_at_1000
value: 45.341
- type: map_at_3
value: 39.956
- type: map_at_5
value: 42.031
- type: mrr_at_1
value: 39.485
- type: mrr_at_10
value: 49.537
- type: mrr_at_100
value: 50.249
- type: mrr_at_1000
value: 50.294000000000004
- type: mrr_at_3
value: 46.757
- type: mrr_at_5
value: 48.481
- type: ndcg_at_1
value: 39.485
- type: ndcg_at_10
value: 50.058
- type: ndcg_at_100
value: 55.586
- type: ndcg_at_1000
value: 57.511
- type: ndcg_at_3
value: 44.786
- type: ndcg_at_5
value: 47.339999999999996
- type: precision_at_1
value: 39.485
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.552
- type: precision_at_1000
value: 0.202
- type: precision_at_3
value: 21.412
- type: precision_at_5
value: 15.479000000000001
- type: recall_at_1
value: 32.745999999999995
- type: recall_at_10
value: 62.056
- type: recall_at_100
value: 85.088
- type: recall_at_1000
value: 96.952
- type: recall_at_3
value: 46.959
- type: recall_at_5
value: 54.06999999999999
- type: map_at_1
value: 31.898
- type: map_at_10
value: 42.142
- type: map_at_100
value: 43.349
- type: map_at_1000
value: 43.483
- type: map_at_3
value: 39.18
- type: map_at_5
value: 40.733000000000004
- type: mrr_at_1
value: 39.617999999999995
- type: mrr_at_10
value: 47.922
- type: mrr_at_100
value: 48.547000000000004
- type: mrr_at_1000
value: 48.597
- type: mrr_at_3
value: 45.86
- type: mrr_at_5
value: 46.949000000000005
- type: ndcg_at_1
value: 39.617999999999995
- type: ndcg_at_10
value: 47.739
- type: ndcg_at_100
value: 51.934999999999995
- type: ndcg_at_1000
value: 54.007000000000005
- type: ndcg_at_3
value: 43.748
- type: ndcg_at_5
value: 45.345
- type: precision_at_1
value: 39.617999999999995
- type: precision_at_10
value: 8.962
- type: precision_at_100
value: 1.436
- type: precision_at_1000
value: 0.192
- type: precision_at_3
value: 21.083
- type: precision_at_5
value: 14.752
- type: recall_at_1
value: 31.898
- type: recall_at_10
value: 57.587999999999994
- type: recall_at_100
value: 75.323
- type: recall_at_1000
value: 88.304
- type: recall_at_3
value: 45.275
- type: recall_at_5
value: 49.99
- type: map_at_1
value: 40.458
- type: map_at_10
value: 52.942
- type: map_at_100
value: 53.974
- type: map_at_1000
value: 54.031
- type: map_at_3
value: 49.559999999999995
- type: map_at_5
value: 51.408
- type: mrr_at_1
value: 46.27
- type: mrr_at_10
value: 56.31699999999999
- type: mrr_at_100
value: 56.95099999999999
- type: mrr_at_1000
value: 56.98
- type: mrr_at_3
value: 53.835
- type: mrr_at_5
value: 55.252
- type: ndcg_at_1
value: 46.27
- type: ndcg_at_10
value: 58.964000000000006
- type: ndcg_at_100
value: 62.875
- type: ndcg_at_1000
value: 63.969
- type: ndcg_at_3
value: 53.297000000000004
- type: ndcg_at_5
value: 55.938
- type: precision_at_1
value: 46.27
- type: precision_at_10
value: 9.549000000000001
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 23.762
- type: precision_at_5
value: 16.262999999999998
- type: recall_at_1
value: 40.458
- type: recall_at_10
value: 73.446
- type: recall_at_100
value: 90.12400000000001
- type: recall_at_1000
value: 97.795
- type: recall_at_3
value: 58.123000000000005
- type: recall_at_5
value: 64.68
- type: map_at_1
value: 27.443
- type: map_at_10
value: 36.081
- type: map_at_100
value: 37.163000000000004
- type: map_at_1000
value: 37.232
- type: map_at_3
value: 33.308
- type: map_at_5
value: 34.724
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.138
- type: mrr_at_100
value: 39.065
- type: mrr_at_1000
value: 39.119
- type: mrr_at_3
value: 35.593
- type: mrr_at_5
value: 36.785000000000004
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.134
- type: ndcg_at_100
value: 46.300999999999995
- type: ndcg_at_1000
value: 48.106
- type: ndcg_at_3
value: 35.77
- type: ndcg_at_5
value: 38.032
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.249
- type: precision_at_100
value: 0.9299999999999999
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 15.065999999999999
- type: precision_at_5
value: 10.373000000000001
- type: recall_at_1
value: 27.443
- type: recall_at_10
value: 54.80199999999999
- type: recall_at_100
value: 78.21900000000001
- type: recall_at_1000
value: 91.751
- type: recall_at_3
value: 40.211000000000006
- type: recall_at_5
value: 45.599000000000004
- type: map_at_1
value: 18.731
- type: map_at_10
value: 26.717999999999996
- type: map_at_100
value: 27.897
- type: map_at_1000
value: 28.029
- type: map_at_3
value: 23.91
- type: map_at_5
value: 25.455
- type: mrr_at_1
value: 23.134
- type: mrr_at_10
value: 31.769
- type: mrr_at_100
value: 32.634
- type: mrr_at_1000
value: 32.707
- type: mrr_at_3
value: 28.938999999999997
- type: mrr_at_5
value: 30.531000000000002
- type: ndcg_at_1
value: 23.134
- type: ndcg_at_10
value: 32.249
- type: ndcg_at_100
value: 37.678
- type: ndcg_at_1000
value: 40.589999999999996
- type: ndcg_at_3
value: 26.985999999999997
- type: ndcg_at_5
value: 29.457
- type: precision_at_1
value: 23.134
- type: precision_at_10
value: 5.8709999999999996
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.852
- type: precision_at_5
value: 9.428
- type: recall_at_1
value: 18.731
- type: recall_at_10
value: 44.419
- type: recall_at_100
value: 67.851
- type: recall_at_1000
value: 88.103
- type: recall_at_3
value: 29.919
- type: recall_at_5
value: 36.230000000000004
- type: map_at_1
value: 30.324
- type: map_at_10
value: 41.265
- type: map_at_100
value: 42.559000000000005
- type: map_at_1000
value: 42.669000000000004
- type: map_at_3
value: 38.138
- type: map_at_5
value: 39.881
- type: mrr_at_1
value: 36.67
- type: mrr_at_10
value: 46.774
- type: mrr_at_100
value: 47.554
- type: mrr_at_1000
value: 47.593
- type: mrr_at_3
value: 44.338
- type: mrr_at_5
value: 45.723
- type: ndcg_at_1
value: 36.67
- type: ndcg_at_10
value: 47.367
- type: ndcg_at_100
value: 52.623
- type: ndcg_at_1000
value: 54.59
- type: ndcg_at_3
value: 42.323
- type: ndcg_at_5
value: 44.727
- type: precision_at_1
value: 36.67
- type: precision_at_10
value: 8.518
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 19.955000000000002
- type: precision_at_5
value: 14.11
- type: recall_at_1
value: 30.324
- type: recall_at_10
value: 59.845000000000006
- type: recall_at_100
value: 81.77499999999999
- type: recall_at_1000
value: 94.463
- type: recall_at_3
value: 46.019
- type: recall_at_5
value: 52.163000000000004
- type: map_at_1
value: 24.229
- type: map_at_10
value: 35.004000000000005
- type: map_at_100
value: 36.409000000000006
- type: map_at_1000
value: 36.521
- type: map_at_3
value: 31.793
- type: map_at_5
value: 33.432
- type: mrr_at_1
value: 30.365
- type: mrr_at_10
value: 40.502
- type: mrr_at_100
value: 41.372
- type: mrr_at_1000
value: 41.435
- type: mrr_at_3
value: 37.804
- type: mrr_at_5
value: 39.226
- type: ndcg_at_1
value: 30.365
- type: ndcg_at_10
value: 41.305
- type: ndcg_at_100
value: 47.028999999999996
- type: ndcg_at_1000
value: 49.375
- type: ndcg_at_3
value: 35.85
- type: ndcg_at_5
value: 38.12
- type: precision_at_1
value: 30.365
- type: precision_at_10
value: 7.808
- type: precision_at_100
value: 1.228
- type: precision_at_1000
value: 0.161
- type: precision_at_3
value: 17.352
- type: precision_at_5
value: 12.42
- type: recall_at_1
value: 24.229
- type: recall_at_10
value: 54.673
- type: recall_at_100
value: 78.766
- type: recall_at_1000
value: 94.625
- type: recall_at_3
value: 39.602
- type: recall_at_5
value: 45.558
- type: map_at_1
value: 26.695
- type: map_at_10
value: 36.0895
- type: map_at_100
value: 37.309416666666664
- type: map_at_1000
value: 37.42558333333334
- type: map_at_3
value: 33.19616666666666
- type: map_at_5
value: 34.78641666666667
- type: mrr_at_1
value: 31.486083333333337
- type: mrr_at_10
value: 40.34774999999999
- type: mrr_at_100
value: 41.17533333333333
- type: mrr_at_1000
value: 41.231583333333326
- type: mrr_at_3
value: 37.90075
- type: mrr_at_5
value: 39.266999999999996
- type: ndcg_at_1
value: 31.486083333333337
- type: ndcg_at_10
value: 41.60433333333334
- type: ndcg_at_100
value: 46.74525
- type: ndcg_at_1000
value: 48.96166666666667
- type: ndcg_at_3
value: 36.68825
- type: ndcg_at_5
value: 38.966499999999996
- type: precision_at_1
value: 31.486083333333337
- type: precision_at_10
value: 7.29675
- type: precision_at_100
value: 1.1621666666666666
- type: precision_at_1000
value: 0.1545
- type: precision_at_3
value: 16.8815
- type: precision_at_5
value: 11.974583333333333
- type: recall_at_1
value: 26.695
- type: recall_at_10
value: 53.651916666666665
- type: recall_at_100
value: 76.12083333333332
- type: recall_at_1000
value: 91.31191666666668
- type: recall_at_3
value: 40.03575
- type: recall_at_5
value: 45.876666666666665
- type: map_at_1
value: 25.668000000000003
- type: map_at_10
value: 32.486
- type: map_at_100
value: 33.371
- type: map_at_1000
value: 33.458
- type: map_at_3
value: 30.261
- type: map_at_5
value: 31.418000000000003
- type: mrr_at_1
value: 28.988000000000003
- type: mrr_at_10
value: 35.414
- type: mrr_at_100
value: 36.149
- type: mrr_at_1000
value: 36.215
- type: mrr_at_3
value: 33.333
- type: mrr_at_5
value: 34.43
- type: ndcg_at_1
value: 28.988000000000003
- type: ndcg_at_10
value: 36.732
- type: ndcg_at_100
value: 41.331
- type: ndcg_at_1000
value: 43.575
- type: ndcg_at_3
value: 32.413
- type: ndcg_at_5
value: 34.316
- type: precision_at_1
value: 28.988000000000003
- type: precision_at_10
value: 5.7059999999999995
- type: precision_at_100
value: 0.882
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 13.65
- type: precision_at_5
value: 9.417
- type: recall_at_1
value: 25.668000000000003
- type: recall_at_10
value: 47.147
- type: recall_at_100
value: 68.504
- type: recall_at_1000
value: 85.272
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 39.925
- type: map_at_1
value: 17.256
- type: map_at_10
value: 24.58
- type: map_at_100
value: 25.773000000000003
- type: map_at_1000
value: 25.899
- type: map_at_3
value: 22.236
- type: map_at_5
value: 23.507
- type: mrr_at_1
value: 20.957
- type: mrr_at_10
value: 28.416000000000004
- type: mrr_at_100
value: 29.447000000000003
- type: mrr_at_1000
value: 29.524
- type: mrr_at_3
value: 26.245
- type: mrr_at_5
value: 27.451999999999998
- type: ndcg_at_1
value: 20.957
- type: ndcg_at_10
value: 29.285
- type: ndcg_at_100
value: 35.003
- type: ndcg_at_1000
value: 37.881
- type: ndcg_at_3
value: 25.063000000000002
- type: ndcg_at_5
value: 26.983
- type: precision_at_1
value: 20.957
- type: precision_at_10
value: 5.344
- type: precision_at_100
value: 0.958
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 11.918
- type: precision_at_5
value: 8.596
- type: recall_at_1
value: 17.256
- type: recall_at_10
value: 39.644
- type: recall_at_100
value: 65.279
- type: recall_at_1000
value: 85.693
- type: recall_at_3
value: 27.825
- type: recall_at_5
value: 32.792
- type: map_at_1
value: 26.700000000000003
- type: map_at_10
value: 36.205999999999996
- type: map_at_100
value: 37.316
- type: map_at_1000
value: 37.425000000000004
- type: map_at_3
value: 33.166000000000004
- type: map_at_5
value: 35.032999999999994
- type: mrr_at_1
value: 31.436999999999998
- type: mrr_at_10
value: 40.61
- type: mrr_at_100
value: 41.415
- type: mrr_at_1000
value: 41.48
- type: mrr_at_3
value: 37.966
- type: mrr_at_5
value: 39.599000000000004
- type: ndcg_at_1
value: 31.436999999999998
- type: ndcg_at_10
value: 41.771
- type: ndcg_at_100
value: 46.784
- type: ndcg_at_1000
value: 49.183
- type: ndcg_at_3
value: 36.437000000000005
- type: ndcg_at_5
value: 39.291
- type: precision_at_1
value: 31.436999999999998
- type: precision_at_10
value: 6.987
- type: precision_at_100
value: 1.072
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.448999999999998
- type: precision_at_5
value: 11.866
- type: recall_at_1
value: 26.700000000000003
- type: recall_at_10
value: 54.301
- type: recall_at_100
value: 75.871
- type: recall_at_1000
value: 92.529
- type: recall_at_3
value: 40.201
- type: recall_at_5
value: 47.208
- type: map_at_1
value: 24.296
- type: map_at_10
value: 33.116
- type: map_at_100
value: 34.81
- type: map_at_1000
value: 35.032000000000004
- type: map_at_3
value: 30.105999999999998
- type: map_at_5
value: 31.839000000000002
- type: mrr_at_1
value: 29.051
- type: mrr_at_10
value: 37.803
- type: mrr_at_100
value: 38.856
- type: mrr_at_1000
value: 38.903999999999996
- type: mrr_at_3
value: 35.211
- type: mrr_at_5
value: 36.545
- type: ndcg_at_1
value: 29.051
- type: ndcg_at_10
value: 39.007
- type: ndcg_at_100
value: 45.321
- type: ndcg_at_1000
value: 47.665
- type: ndcg_at_3
value: 34.1
- type: ndcg_at_5
value: 36.437000000000005
- type: precision_at_1
value: 29.051
- type: precision_at_10
value: 7.668
- type: precision_at_100
value: 1.542
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.14
- type: precision_at_5
value: 11.897
- type: recall_at_1
value: 24.296
- type: recall_at_10
value: 49.85
- type: recall_at_100
value: 78.457
- type: recall_at_1000
value: 92.618
- type: recall_at_3
value: 36.138999999999996
- type: recall_at_5
value: 42.223
- type: map_at_1
value: 20.591
- type: map_at_10
value: 28.902
- type: map_at_100
value: 29.886000000000003
- type: map_at_1000
value: 29.987000000000002
- type: map_at_3
value: 26.740000000000002
- type: map_at_5
value: 27.976
- type: mrr_at_1
value: 22.366
- type: mrr_at_10
value: 30.971
- type: mrr_at_100
value: 31.865
- type: mrr_at_1000
value: 31.930999999999997
- type: mrr_at_3
value: 28.927999999999997
- type: mrr_at_5
value: 30.231
- type: ndcg_at_1
value: 22.366
- type: ndcg_at_10
value: 33.641
- type: ndcg_at_100
value: 38.477
- type: ndcg_at_1000
value: 41.088
- type: ndcg_at_3
value: 29.486
- type: ndcg_at_5
value: 31.612000000000002
- type: precision_at_1
value: 22.366
- type: precision_at_10
value: 5.3420000000000005
- type: precision_at_100
value: 0.828
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 12.939
- type: precision_at_5
value: 9.094
- type: recall_at_1
value: 20.591
- type: recall_at_10
value: 46.052
- type: recall_at_100
value: 68.193
- type: recall_at_1000
value: 87.638
- type: recall_at_3
value: 34.966
- type: recall_at_5
value: 40.082
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 15.091
- type: map_at_10
value: 26.38
- type: map_at_100
value: 28.421999999999997
- type: map_at_1000
value: 28.621999999999996
- type: map_at_3
value: 21.597
- type: map_at_5
value: 24.12
- type: mrr_at_1
value: 34.266999999999996
- type: mrr_at_10
value: 46.864
- type: mrr_at_100
value: 47.617
- type: mrr_at_1000
value: 47.644
- type: mrr_at_3
value: 43.312
- type: mrr_at_5
value: 45.501000000000005
- type: ndcg_at_1
value: 34.266999999999996
- type: ndcg_at_10
value: 36.095
- type: ndcg_at_100
value: 43.447
- type: ndcg_at_1000
value: 46.661
- type: ndcg_at_3
value: 29.337999999999997
- type: ndcg_at_5
value: 31.824
- type: precision_at_1
value: 34.266999999999996
- type: precision_at_10
value: 11.472
- type: precision_at_100
value: 1.944
- type: precision_at_1000
value: 0.255
- type: precision_at_3
value: 21.933
- type: precision_at_5
value: 17.224999999999998
- type: recall_at_1
value: 15.091
- type: recall_at_10
value: 43.022
- type: recall_at_100
value: 68.075
- type: recall_at_1000
value: 85.76
- type: recall_at_3
value: 26.564
- type: recall_at_5
value: 33.594
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.252
- type: map_at_10
value: 20.923
- type: map_at_100
value: 30.741000000000003
- type: map_at_1000
value: 32.542
- type: map_at_3
value: 14.442
- type: map_at_5
value: 17.399
- type: mrr_at_1
value: 70.25
- type: mrr_at_10
value: 78.17
- type: mrr_at_100
value: 78.444
- type: mrr_at_1000
value: 78.45100000000001
- type: mrr_at_3
value: 76.958
- type: mrr_at_5
value: 77.571
- type: ndcg_at_1
value: 58.375
- type: ndcg_at_10
value: 44.509
- type: ndcg_at_100
value: 49.897999999999996
- type: ndcg_at_1000
value: 57.269999999999996
- type: ndcg_at_3
value: 48.64
- type: ndcg_at_5
value: 46.697
- type: precision_at_1
value: 70.25
- type: precision_at_10
value: 36.05
- type: precision_at_100
value: 11.848
- type: precision_at_1000
value: 2.213
- type: precision_at_3
value: 52.917
- type: precision_at_5
value: 45.7
- type: recall_at_1
value: 9.252
- type: recall_at_10
value: 27.006999999999998
- type: recall_at_100
value: 57.008
- type: recall_at_1000
value: 80.697
- type: recall_at_3
value: 15.798000000000002
- type: recall_at_5
value: 20.4
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 50.88
- type: f1
value: 45.545495028653384
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 75.424
- type: map_at_10
value: 83.435
- type: map_at_100
value: 83.66900000000001
- type: map_at_1000
value: 83.685
- type: map_at_3
value: 82.39800000000001
- type: map_at_5
value: 83.07
- type: mrr_at_1
value: 81.113
- type: mrr_at_10
value: 87.77199999999999
- type: mrr_at_100
value: 87.862
- type: mrr_at_1000
value: 87.86500000000001
- type: mrr_at_3
value: 87.17099999999999
- type: mrr_at_5
value: 87.616
- type: ndcg_at_1
value: 81.113
- type: ndcg_at_10
value: 86.909
- type: ndcg_at_100
value: 87.746
- type: ndcg_at_1000
value: 88.017
- type: ndcg_at_3
value: 85.368
- type: ndcg_at_5
value: 86.28099999999999
- type: precision_at_1
value: 81.113
- type: precision_at_10
value: 10.363
- type: precision_at_100
value: 1.102
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 32.507999999999996
- type: precision_at_5
value: 20.138
- type: recall_at_1
value: 75.424
- type: recall_at_10
value: 93.258
- type: recall_at_100
value: 96.545
- type: recall_at_1000
value: 98.284
- type: recall_at_3
value: 89.083
- type: recall_at_5
value: 91.445
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.532
- type: map_at_10
value: 37.141999999999996
- type: map_at_100
value: 39.162
- type: map_at_1000
value: 39.322
- type: map_at_3
value: 32.885
- type: map_at_5
value: 35.093999999999994
- type: mrr_at_1
value: 44.29
- type: mrr_at_10
value: 53.516
- type: mrr_at_100
value: 54.24
- type: mrr_at_1000
value: 54.273
- type: mrr_at_3
value: 51.286
- type: mrr_at_5
value: 52.413
- type: ndcg_at_1
value: 44.29
- type: ndcg_at_10
value: 45.268
- type: ndcg_at_100
value: 52.125
- type: ndcg_at_1000
value: 54.778000000000006
- type: ndcg_at_3
value: 41.829
- type: ndcg_at_5
value: 42.525
- type: precision_at_1
value: 44.29
- type: precision_at_10
value: 12.5
- type: precision_at_100
value: 1.9720000000000002
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 28.035
- type: precision_at_5
value: 20.093
- type: recall_at_1
value: 22.532
- type: recall_at_10
value: 52.419000000000004
- type: recall_at_100
value: 77.43299999999999
- type: recall_at_1000
value: 93.379
- type: recall_at_3
value: 38.629000000000005
- type: recall_at_5
value: 43.858000000000004
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.359
- type: map_at_10
value: 63.966
- type: map_at_100
value: 64.87
- type: map_at_1000
value: 64.92599999999999
- type: map_at_3
value: 60.409
- type: map_at_5
value: 62.627
- type: mrr_at_1
value: 78.717
- type: mrr_at_10
value: 84.468
- type: mrr_at_100
value: 84.655
- type: mrr_at_1000
value: 84.661
- type: mrr_at_3
value: 83.554
- type: mrr_at_5
value: 84.133
- type: ndcg_at_1
value: 78.717
- type: ndcg_at_10
value: 72.03399999999999
- type: ndcg_at_100
value: 75.158
- type: ndcg_at_1000
value: 76.197
- type: ndcg_at_3
value: 67.049
- type: ndcg_at_5
value: 69.808
- type: precision_at_1
value: 78.717
- type: precision_at_10
value: 15.201
- type: precision_at_100
value: 1.764
- type: precision_at_1000
value: 0.19
- type: precision_at_3
value: 43.313
- type: precision_at_5
value: 28.165000000000003
- type: recall_at_1
value: 39.359
- type: recall_at_10
value: 76.003
- type: recall_at_100
value: 88.197
- type: recall_at_1000
value: 95.003
- type: recall_at_3
value: 64.97
- type: recall_at_5
value: 70.41199999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 92.83200000000001
- type: ap
value: 89.33560571859861
- type: f1
value: 92.82322915005167
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.983
- type: map_at_10
value: 34.259
- type: map_at_100
value: 35.432
- type: map_at_1000
value: 35.482
- type: map_at_3
value: 30.275999999999996
- type: map_at_5
value: 32.566
- type: mrr_at_1
value: 22.579
- type: mrr_at_10
value: 34.882999999999996
- type: mrr_at_100
value: 35.984
- type: mrr_at_1000
value: 36.028
- type: mrr_at_3
value: 30.964999999999996
- type: mrr_at_5
value: 33.245000000000005
- type: ndcg_at_1
value: 22.564
- type: ndcg_at_10
value: 41.258
- type: ndcg_at_100
value: 46.824
- type: ndcg_at_1000
value: 48.037
- type: ndcg_at_3
value: 33.17
- type: ndcg_at_5
value: 37.263000000000005
- type: precision_at_1
value: 22.564
- type: precision_at_10
value: 6.572
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.130999999999998
- type: precision_at_5
value: 10.544
- type: recall_at_1
value: 21.983
- type: recall_at_10
value: 62.775000000000006
- type: recall_at_100
value: 88.389
- type: recall_at_1000
value: 97.603
- type: recall_at_3
value: 40.878
- type: recall_at_5
value: 50.690000000000005
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.95120839033288
- type: f1
value: 93.73824125055208
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.78978568171455
- type: f1
value: 57.50180552858304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.24411566913248
- type: f1
value: 74.37851403532832
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.94620040349699
- type: f1
value: 80.21293397970435
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 33.44403096245675
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.659594631336812
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 32.53833075108798
- type: mrr
value: 33.78840823218308
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 7.185999999999999
- type: map_at_10
value: 15.193999999999999
- type: map_at_100
value: 19.538
- type: map_at_1000
value: 21.178
- type: map_at_3
value: 11.208
- type: map_at_5
value: 12.745999999999999
- type: mrr_at_1
value: 48.916
- type: mrr_at_10
value: 58.141
- type: mrr_at_100
value: 58.656
- type: mrr_at_1000
value: 58.684999999999995
- type: mrr_at_3
value: 55.521
- type: mrr_at_5
value: 57.239
- type: ndcg_at_1
value: 47.059
- type: ndcg_at_10
value: 38.644
- type: ndcg_at_100
value: 36.272999999999996
- type: ndcg_at_1000
value: 44.996
- type: ndcg_at_3
value: 43.293
- type: ndcg_at_5
value: 40.819
- type: precision_at_1
value: 48.916
- type: precision_at_10
value: 28.607
- type: precision_at_100
value: 9.195
- type: precision_at_1000
value: 2.225
- type: precision_at_3
value: 40.454
- type: precision_at_5
value: 34.985
- type: recall_at_1
value: 7.185999999999999
- type: recall_at_10
value: 19.654
- type: recall_at_100
value: 37.224000000000004
- type: recall_at_1000
value: 68.663
- type: recall_at_3
value: 12.158
- type: recall_at_5
value: 14.674999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.552000000000003
- type: map_at_10
value: 47.75
- type: map_at_100
value: 48.728
- type: map_at_1000
value: 48.754
- type: map_at_3
value: 43.156
- type: map_at_5
value: 45.883
- type: mrr_at_1
value: 35.66
- type: mrr_at_10
value: 50.269
- type: mrr_at_100
value: 50.974
- type: mrr_at_1000
value: 50.991
- type: mrr_at_3
value: 46.519
- type: mrr_at_5
value: 48.764
- type: ndcg_at_1
value: 35.632000000000005
- type: ndcg_at_10
value: 55.786
- type: ndcg_at_100
value: 59.748999999999995
- type: ndcg_at_1000
value: 60.339
- type: ndcg_at_3
value: 47.292
- type: ndcg_at_5
value: 51.766999999999996
- type: precision_at_1
value: 35.632000000000005
- type: precision_at_10
value: 9.267
- type: precision_at_100
value: 1.149
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 21.601
- type: precision_at_5
value: 15.539
- type: recall_at_1
value: 31.552000000000003
- type: recall_at_10
value: 77.62400000000001
- type: recall_at_100
value: 94.527
- type: recall_at_1000
value: 98.919
- type: recall_at_3
value: 55.898
- type: recall_at_5
value: 66.121
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.414
- type: map_at_10
value: 85.37400000000001
- type: map_at_100
value: 86.01100000000001
- type: map_at_1000
value: 86.027
- type: map_at_3
value: 82.562
- type: map_at_5
value: 84.284
- type: mrr_at_1
value: 82.24000000000001
- type: mrr_at_10
value: 88.225
- type: mrr_at_100
value: 88.324
- type: mrr_at_1000
value: 88.325
- type: mrr_at_3
value: 87.348
- type: mrr_at_5
value: 87.938
- type: ndcg_at_1
value: 82.24000000000001
- type: ndcg_at_10
value: 88.97699999999999
- type: ndcg_at_100
value: 90.16
- type: ndcg_at_1000
value: 90.236
- type: ndcg_at_3
value: 86.371
- type: ndcg_at_5
value: 87.746
- type: precision_at_1
value: 82.24000000000001
- type: precision_at_10
value: 13.481000000000002
- type: precision_at_100
value: 1.534
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.86
- type: precision_at_5
value: 24.738
- type: recall_at_1
value: 71.414
- type: recall_at_10
value: 95.735
- type: recall_at_100
value: 99.696
- type: recall_at_1000
value: 99.979
- type: recall_at_3
value: 88.105
- type: recall_at_5
value: 92.17999999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 60.22146692057259
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 65.29273320614578
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.023
- type: map_at_10
value: 14.161000000000001
- type: map_at_100
value: 16.68
- type: map_at_1000
value: 17.072000000000003
- type: map_at_3
value: 9.763
- type: map_at_5
value: 11.977
- type: mrr_at_1
value: 24.8
- type: mrr_at_10
value: 37.602999999999994
- type: mrr_at_100
value: 38.618
- type: mrr_at_1000
value: 38.659
- type: mrr_at_3
value: 34.117
- type: mrr_at_5
value: 36.082
- type: ndcg_at_1
value: 24.8
- type: ndcg_at_10
value: 23.316
- type: ndcg_at_100
value: 32.613
- type: ndcg_at_1000
value: 38.609
- type: ndcg_at_3
value: 21.697
- type: ndcg_at_5
value: 19.241
- type: precision_at_1
value: 24.8
- type: precision_at_10
value: 12.36
- type: precision_at_100
value: 2.593
- type: precision_at_1000
value: 0.402
- type: precision_at_3
value: 20.767
- type: precision_at_5
value: 17.34
- type: recall_at_1
value: 5.023
- type: recall_at_10
value: 25.069999999999997
- type: recall_at_100
value: 52.563
- type: recall_at_1000
value: 81.525
- type: recall_at_3
value: 12.613
- type: recall_at_5
value: 17.583
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 87.71506247604255
- type: cos_sim_spearman
value: 82.91813463738802
- type: euclidean_pearson
value: 85.5154616194479
- type: euclidean_spearman
value: 82.91815254466314
- type: manhattan_pearson
value: 85.5280917850374
- type: manhattan_spearman
value: 82.92276537286398
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.43772054228462
- type: cos_sim_spearman
value: 78.75750601716682
- type: euclidean_pearson
value: 85.76074482955764
- type: euclidean_spearman
value: 78.75651057223058
- type: manhattan_pearson
value: 85.73390291701668
- type: manhattan_spearman
value: 78.72699385957797
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 89.58144067172472
- type: cos_sim_spearman
value: 90.3524512966946
- type: euclidean_pearson
value: 89.71365391594237
- type: euclidean_spearman
value: 90.35239632843408
- type: manhattan_pearson
value: 89.66905421746478
- type: manhattan_spearman
value: 90.31508211683513
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 87.77692637102102
- type: cos_sim_spearman
value: 85.45710562643485
- type: euclidean_pearson
value: 87.42456979928723
- type: euclidean_spearman
value: 85.45709386240908
- type: manhattan_pearson
value: 87.40754529526272
- type: manhattan_spearman
value: 85.44834854173303
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.28491331695997
- type: cos_sim_spearman
value: 89.62037029566964
- type: euclidean_pearson
value: 89.02479391362826
- type: euclidean_spearman
value: 89.62036733618466
- type: manhattan_pearson
value: 89.00394756040342
- type: manhattan_spearman
value: 89.60867744215236
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.08911381280191
- type: cos_sim_spearman
value: 86.5791780765767
- type: euclidean_pearson
value: 86.16063473577861
- type: euclidean_spearman
value: 86.57917745378766
- type: manhattan_pearson
value: 86.13677924604175
- type: manhattan_spearman
value: 86.56115615768685
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.58029496205235
- type: cos_sim_spearman
value: 89.49551253826998
- type: euclidean_pearson
value: 90.13714840963748
- type: euclidean_spearman
value: 89.49551253826998
- type: manhattan_pearson
value: 90.13039633601363
- type: manhattan_spearman
value: 89.4513453745516
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.01546399666435
- type: cos_sim_spearman
value: 69.33824484595624
- type: euclidean_pearson
value: 70.76511642998874
- type: euclidean_spearman
value: 69.33824484595624
- type: manhattan_pearson
value: 70.84320785047453
- type: manhattan_spearman
value: 69.54233632223537
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.26389196390119
- type: cos_sim_spearman
value: 89.09721478341385
- type: euclidean_pearson
value: 88.97208685922517
- type: euclidean_spearman
value: 89.09720927308881
- type: manhattan_pearson
value: 88.97513670502573
- type: manhattan_spearman
value: 89.07647853984004
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.53075025771936
- type: mrr
value: 96.24327651288436
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.428000000000004
- type: map_at_10
value: 70.088
- type: map_at_100
value: 70.589
- type: map_at_1000
value: 70.614
- type: map_at_3
value: 67.191
- type: map_at_5
value: 68.515
- type: mrr_at_1
value: 63.333
- type: mrr_at_10
value: 71.13000000000001
- type: mrr_at_100
value: 71.545
- type: mrr_at_1000
value: 71.569
- type: mrr_at_3
value: 68.944
- type: mrr_at_5
value: 70.078
- type: ndcg_at_1
value: 63.333
- type: ndcg_at_10
value: 74.72800000000001
- type: ndcg_at_100
value: 76.64999999999999
- type: ndcg_at_1000
value: 77.176
- type: ndcg_at_3
value: 69.659
- type: ndcg_at_5
value: 71.626
- type: precision_at_1
value: 63.333
- type: precision_at_10
value: 10
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 17.666999999999998
- type: recall_at_1
value: 60.428000000000004
- type: recall_at_10
value: 87.98899999999999
- type: recall_at_100
value: 96.167
- type: recall_at_1000
value: 100
- type: recall_at_3
value: 74.006
- type: recall_at_5
value: 79.05
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.87326732673267
- type: cos_sim_ap
value: 96.81770773701805
- type: cos_sim_f1
value: 93.6318407960199
- type: cos_sim_precision
value: 93.16831683168317
- type: cos_sim_recall
value: 94.1
- type: dot_accuracy
value: 99.87326732673267
- type: dot_ap
value: 96.8174218946665
- type: dot_f1
value: 93.6318407960199
- type: dot_precision
value: 93.16831683168317
- type: dot_recall
value: 94.1
- type: euclidean_accuracy
value: 99.87326732673267
- type: euclidean_ap
value: 96.81770773701807
- type: euclidean_f1
value: 93.6318407960199
- type: euclidean_precision
value: 93.16831683168317
- type: euclidean_recall
value: 94.1
- type: manhattan_accuracy
value: 99.87227722772278
- type: manhattan_ap
value: 96.83164126821747
- type: manhattan_f1
value: 93.54677338669335
- type: manhattan_precision
value: 93.5935935935936
- type: manhattan_recall
value: 93.5
- type: max_accuracy
value: 99.87326732673267
- type: max_ap
value: 96.83164126821747
- type: max_f1
value: 93.6318407960199
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.6212042420246
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.779230635982564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.217701909036286
- type: mrr
value: 56.17658995416349
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.954206018888453
- type: cos_sim_spearman
value: 32.71062599450096
- type: dot_pearson
value: 30.95420929056943
- type: dot_spearman
value: 32.71062599450096
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22699999999999998
- type: map_at_10
value: 1.924
- type: map_at_100
value: 10.525
- type: map_at_1000
value: 24.973
- type: map_at_3
value: 0.638
- type: map_at_5
value: 1.0659999999999998
- type: mrr_at_1
value: 84
- type: mrr_at_10
value: 91.067
- type: mrr_at_100
value: 91.067
- type: mrr_at_1000
value: 91.067
- type: mrr_at_3
value: 90.667
- type: mrr_at_5
value: 91.067
- type: ndcg_at_1
value: 81
- type: ndcg_at_10
value: 75.566
- type: ndcg_at_100
value: 56.387
- type: ndcg_at_1000
value: 49.834
- type: ndcg_at_3
value: 80.899
- type: ndcg_at_5
value: 80.75099999999999
- type: precision_at_1
value: 84
- type: precision_at_10
value: 79
- type: precision_at_100
value: 57.56
- type: precision_at_1000
value: 21.8
- type: precision_at_3
value: 84.667
- type: precision_at_5
value: 85.2
- type: recall_at_1
value: 0.22699999999999998
- type: recall_at_10
value: 2.136
- type: recall_at_100
value: 13.861
- type: recall_at_1000
value: 46.299
- type: recall_at_3
value: 0.6649999999999999
- type: recall_at_5
value: 1.145
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.752
- type: map_at_10
value: 9.951
- type: map_at_100
value: 16.794999999999998
- type: map_at_1000
value: 18.251
- type: map_at_3
value: 5.288
- type: map_at_5
value: 6.954000000000001
- type: mrr_at_1
value: 38.775999999999996
- type: mrr_at_10
value: 50.458000000000006
- type: mrr_at_100
value: 51.324999999999996
- type: mrr_at_1000
value: 51.339999999999996
- type: mrr_at_3
value: 46.939
- type: mrr_at_5
value: 47.857
- type: ndcg_at_1
value: 36.735
- type: ndcg_at_10
value: 25.198999999999998
- type: ndcg_at_100
value: 37.938
- type: ndcg_at_1000
value: 49.145
- type: ndcg_at_3
value: 29.348000000000003
- type: ndcg_at_5
value: 25.804
- type: precision_at_1
value: 38.775999999999996
- type: precision_at_10
value: 22.041
- type: precision_at_100
value: 7.939
- type: precision_at_1000
value: 1.555
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.752
- type: recall_at_10
value: 16.197
- type: recall_at_100
value: 49.166
- type: recall_at_1000
value: 84.18900000000001
- type: recall_at_3
value: 6.438000000000001
- type: recall_at_5
value: 9.093
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.47980000000001
- type: ap
value: 14.605194452178754
- type: f1
value: 55.07362924988948
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.708545557441994
- type: f1
value: 60.04751270975683
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.21105960597211
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.58419264469214
- type: cos_sim_ap
value: 78.55300004517404
- type: cos_sim_f1
value: 71.49673530889001
- type: cos_sim_precision
value: 68.20795400095831
- type: cos_sim_recall
value: 75.11873350923483
- type: dot_accuracy
value: 87.58419264469214
- type: dot_ap
value: 78.55297659559511
- type: dot_f1
value: 71.49673530889001
- type: dot_precision
value: 68.20795400095831
- type: dot_recall
value: 75.11873350923483
- type: euclidean_accuracy
value: 87.58419264469214
- type: euclidean_ap
value: 78.55300477331477
- type: euclidean_f1
value: 71.49673530889001
- type: euclidean_precision
value: 68.20795400095831
- type: euclidean_recall
value: 75.11873350923483
- type: manhattan_accuracy
value: 87.5663110210407
- type: manhattan_ap
value: 78.49982050876562
- type: manhattan_f1
value: 71.35488740722104
- type: manhattan_precision
value: 68.18946862226497
- type: manhattan_recall
value: 74.82849604221636
- type: max_accuracy
value: 87.58419264469214
- type: max_ap
value: 78.55300477331477
- type: max_f1
value: 71.49673530889001
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.09069740365584
- type: cos_sim_ap
value: 86.22749303724757
- type: cos_sim_f1
value: 78.36863452005407
- type: cos_sim_precision
value: 76.49560117302053
- type: cos_sim_recall
value: 80.33569448721897
- type: dot_accuracy
value: 89.09069740365584
- type: dot_ap
value: 86.22750233655673
- type: dot_f1
value: 78.36863452005407
- type: dot_precision
value: 76.49560117302053
- type: dot_recall
value: 80.33569448721897
- type: euclidean_accuracy
value: 89.09069740365584
- type: euclidean_ap
value: 86.22749355597347
- type: euclidean_f1
value: 78.36863452005407
- type: euclidean_precision
value: 76.49560117302053
- type: euclidean_recall
value: 80.33569448721897
- type: manhattan_accuracy
value: 89.08293553770326
- type: manhattan_ap
value: 86.21913616084771
- type: manhattan_f1
value: 78.3907031479847
- type: manhattan_precision
value: 75.0352013517319
- type: manhattan_recall
value: 82.06036341238065
- type: max_accuracy
value: 89.09069740365584
- type: max_ap
value: 86.22750233655673
- type: max_f1
value: 78.3907031479847
---
# magicunicorn/mxbai-embed-large-v1-Q8_0-GGUF
This model was converted to GGUF format from [`mixedbread-ai/mxbai-embed-large-v1`](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo magicunicorn/mxbai-embed-large-v1-Q8_0-GGUF --hf-file mxbai-embed-large-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
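Since this is an embedding model, the completion prompt above mainly confirms that the file loads; for actual embeddings, recent llama.cpp builds ship a dedicated `llama-embedding` binary. A minimal sketch, assuming that binary accepts the same `--hf-repo`/`--hf-file` download flags as `llama-cli` (check `llama-embedding --help` on your build):
```bash
# Sketch: print the embedding vector for one sentence.
# Assumes llama-embedding supports the common --hf-repo/--hf-file flags.
llama-embedding --hf-repo magicunicorn/mxbai-embed-large-v1-Q8_0-GGUF \
  --hf-file mxbai-embed-large-v1-q8_0.gguf \
  -p "Represent this sentence for searching relevant passages: what is GGUF?"
```
The query prefix follows the retrieval instruction recommended on the original model card; plain passages are embedded without it.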
### Server:
```bash
llama-server --hf-repo magicunicorn/mxbai-embed-large-v1-Q8_0-GGUF --hf-file mxbai-embed-large-v1-q8_0.gguf -c 2048
```
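For embedding workloads, the server usually needs to be started with the `--embedding` flag; the endpoint and payload below are a hedged sketch based on llama.cpp's server README, not something verified against this exact checkpoint:
```bash
# Start the server in embedding mode (note the extra --embedding flag),
# then request a vector over HTTP (default port 8080).
llama-server --hf-repo magicunicorn/mxbai-embed-large-v1-Q8_0-GGUF \
  --hf-file mxbai-embed-large-v1-q8_0.gguf -c 2048 --embedding
curl http://localhost:8080/embedding \
  -H "Content-Type: application/json" \
  -d '{"content": "The quick brown fox jumps over the lazy dog"}'
```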
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
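Recent llama.cpp revisions have moved from Makefiles to CMake, so if `make` fails on a fresh checkout, the equivalent CMake invocation is roughly as follows (flag names are assumptions; check the repo's current build docs):
```bash
# CMake equivalent of the Makefile build above.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```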
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo magicunicorn/mxbai-embed-large-v1-Q8_0-GGUF --hf-file mxbai-embed-large-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo magicunicorn/mxbai-embed-large-v1-Q8_0-GGUF --hf-file mxbai-embed-large-v1-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
pszemraj/long-t5-tglobal-base-16384-booksum-V12 | pszemraj | summarization | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"summarization",
"summary",
"booksum",
"long-document",
"long-form",
"dataset:kmfoda/booksum",
"license:apache-2.0",
"license:bsd-3-clause",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-09T20:12:39 | 2023-06-30T06:16:31 | 55 | 4 | ---
datasets:
- kmfoda/booksum
license:
- apache-2.0
- bsd-3-clause
metrics:
- rouge
tags:
- summarization
- summary
- booksum
- long-document
- long-form
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a ''toolbox'' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg-
max P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur- face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).'
example_title: scientific paper
- text: 'Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By''s the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi''s nerocos and do rain become you to let so is his brother is made in
use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I''m also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I''m a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I''d Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let''s
get on with the lecture. There''s an exciting topic today I''m going to start
by sharing some slides with you and later on during the lecture we''ll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let''s get started. Today''s topic is a very important one. It''s
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It''s called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it''s a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let''s say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It''s got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you''ve seen before,
it''s the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You''re not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.'
example_title: transcribed audio - lecture
- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
& memory complexity (where nn is sequence length). Hence, it''s computationally
very expensive to apply transformer-based models on long sequences n > 512n>512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗''s recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one''s life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird''s attention is an approximation
of BERT''s full attention and therefore does not strive to be better than BERT''s
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT''s quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence ''BigBird is now available in HuggingFace for
extractive question answering''. In BERT-like attention, every word would simply
attend to all other tokens.
Let''s think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let''s consider following sentence as an example >>> example = [''BigBird'',
''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
''question'', ''answering'']
>>> # further let''s assume, we''re trying to understand the representation of
''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently ''available'' token doesn''t have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.'
example_title: bigbird blog intro
- text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical physics
most of the jokes will go over a typical viewer''s head. There''s also Rick''s
nihilistic outlook, which is deftly woven into his characterisation- his personal
philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
understand this stuff; they have the intellectual capacity to truly appreciate
the depths of these jokes, to realise that they''re not just funny- they say something
deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
wit unfolds itself on their television screens. What fools.. how I pity them.
😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
It''s for the ladies'' eyes only- and even then they have to demonstrate that
they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
kid 😎'
example_title: Richard & Mortimer
parameters:
max_length: 64
min_length: 8
no_repeat_ngram_size: 3
early_stopping: true
repetition_penalty: 3.5
length_penalty: 0.3
encoder_no_repeat_ngram_size: 3
num_beams: 4
model-index:
- name: pszemraj/long-t5-tglobal-base-16384-booksum-V12
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- type: rouge
value: 30.0032
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk2MTRiNDljZTM4NzliNDdmMTdkZGY3MGY4OTVmMzFhOTdjNGFjYjJhYTBjYTI4Y2VkOGMxYWI5M2M3YWEyZSIsInZlcnNpb24iOjF9.cZtcCwB1Bnnn1g4x8Ia_8oTSK89feGF80r20jwjSb-xy5Xt3eR3dOVjJyjurfN0UOGyEe7inTpneJhcAoRwwBg
- type: rouge
value: 7.2671
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNThiYmJhN2NkYmU0MmZmZGY5MGU2NmEzZGQwNjM0MDEwNzlhNDgzY2E2MzkxMWVkZTUwMWFlZmFhYWEwN2M5ZSIsInZlcnNpb24iOjF9.IaaaHiOxUdh6IDGbb2vCCEcL-YhXCtaFlZnIpcgQwsC3KRgfrpQi5vdhyaaIJSieA2pzbFjUO--WqjylvpysCA
- type: rouge
value: 21.8779
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTc1N2YwODk4YmU1Mjk3NGQ2ZDVkYWVjN2Y1ZDVlOTNkMjU5MjcyYjY0ZWY5NjJkNzZjNjMwZWUxNWY0NTY1ZiIsInZlcnNpb24iOjF9.HhYA0t2Ee3YhtBDPneU7hzEEz5c4FeBcTo-3TSSClltG3A5E3RIgbxUbQNbldRAL9Y44Z8uzEHfe676eL22vBg
- type: rouge
value: 26.4371
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTJmZmJhZTBiZDczYmNkNWQ0MGQ3ZTIyNzc2NGExMGY1MGNkOThlNDg0OWQ3YWFmNDRmYTUxZTYzN2U5Yzc4MCIsInZlcnNpb24iOjF9.fgr8NNlhDCvtXMudOce1pf_slujIhXAEC3a6fH6AAlgIvzxg1oGV5QiUcrPDNhyFD2XazZ39Xk1GhoMk4AnxAQ
- type: loss
value: 2.6383285522460938
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjRiMjAyMjJkM2M5NGZjYzRiZGFlNTJhM2UyNjExODlmNjM4NjRmZTRlMWEzMTUzYTI2NjYzYTAyNmVlYjJjMCIsInZlcnNpb24iOjF9.wKAqpXyvHNGDpxwLmR6mzI4gRwVQI88uFJZJoRAWQD_d-H97y5cpP4VSBes_YfVpFpYzEF8miN9fv660xukiBA
- type: gen_len
value: 54.2357
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzA1Y2IxN2Q4OGU0N2FkNDFmNTFmMjQwZDA4MTczMDJmNWIyMjdhYzhkNTE5ZjI4M2NjZTdkMmUwMTFjMzk1ZCIsInZlcnNpb24iOjF9.JuADjJNIcaqmZTw1RFnklHJYEYfTEKQ0YnmvL1TmvSihIVJORbK-3cFkJLVJdyaaRq40HjhQRw6mmpur9Lq1CQ
- task:
type: summarization
name: Summarization
dataset:
name: launch/gov_report
type: launch/gov_report
config: plain_text
split: test
metrics:
- type: rouge
value: 37.0538
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzViY2Y2ZWIwMDdhNDEzMDU3MmE4ZTBlZjQ2MDI2YTVjOGZjZDM5NzhiZDk2MWJhZWY5MDUwY2NhZTY2OTc5ZSIsInZlcnNpb24iOjF9.p2z_oZD9uVTnBtf7vRRKvisW-rXWVibpU0QQ-S_16CIYLc2kTJRZMLzaMJqbi1d8icBTeG5PdIzKcAVwu7JKCA
- type: rouge
value: 8.1512
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWUzZGM0ZGJiMDYwM2ZmYjI5Mzk5MTU2N2JlZGVlOGRjMTJjY2QwOWIwMjgyMjM0ZjIzY2Q4MzJjNDkxZmVhMCIsInZlcnNpb24iOjF9.z6pMF8l4uMQIEcdyU1kgDc1v3rCn-0TVxntKP3hmOEwRJqfbeqDmhhAROWadYTPNewpfsCpShVHGJt9DvH55BQ
- type: rouge
value: 17.6645
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWNkYzY2NGY4YmFiNWRhODAwZmFmOTkzM2M3MGY0ZTQzZTUwNmExNDc5ZDdhZWVhZjFhYTUyYjFlZjQ3ZDA4ZCIsInZlcnNpb24iOjF9.XbVCDhR_l7OalwF2DsHJSZ39z_HHdG3PlwKL0Ls9lBvRo4E8sk00vrQy4IRCqPF8hPJusl2Nb65V3CvgIldqAA
- type: rouge
value: 33.4275
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDdiYzI0MDlmYjg0MWFjZDBmMmIyZWUyNzNhYTUyNTU1ZDdhODE4ZTlmMTg5MDY1MDhhMGRlMGU1OTA3YzM4ZSIsInZlcnNpb24iOjF9.pDHKUDMXHihmLSQzYq6bxclcLyajcRf6Q5ImhpvpoepG8du5ggwb1q_2anGfDjJ0kkFa-Iwtbl8KmdqD7TTCAQ
- type: loss
value: 2.6052205562591553
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjk0YWNjMjkxZjUwMDBlODNkNjE0ZWRkYzYxZmRjNjBhMmVjNTE2OWFkZTU1OTYzMzMxNzdkMGFlODVjOWVkNCIsInZlcnNpb24iOjF9.n-p8JJBe9nOsKwvS2CHO6HBiI6b-0dUZuVaL9aQgX_qFhETvwR_gHggWXU6sCiLCzkElH6ZpGpcMw9AogJWkCw
- type: gen_len
value: 201.5951
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzMyYWViNDNjMzY2NmQyZjI5MWU2ZjMwMmYyOGFkMzM0YzgwMzg5ZDhmYzYzYzg0OTMzOWY5ZDRiM2NkNWViOSIsInZlcnNpb24iOjF9.6T6C1dimUVOHNbqm5drVZmiWVrQEC0VBc7nSAiyLm2K3WE99FisSByk4zhBtUf_CntT_TZm1dBpfTaAUVPDOAQ
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 36.1423
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTZkYTA5N2FhNjVhMzg1ZDRjOThhZjcwMjdmYzQ1MGE5N2RhNTM0MmNjMzVkYjNlYmZjOGZjMDFlZDBkMGM5MSIsInZlcnNpb24iOjF9.odQ-NMcQ06o2mqzXOfGY1c967_RUfg93YfGnMTpKUXPM5dGawkdVYGO8rPCHt5bttPvYlBmRgNl6Z7H_OhgnCA
- type: rouge
value: 5.634
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZmFkODViOTg2MDYxZDhlMjZiOTNjZWE2ZTI5YmVhYWRiNGM1OTAzZDEzN2Y1ODI4OWI3NzU2ZmZlMGJjNGIyZiIsInZlcnNpb24iOjF9.4-VpnxVDiC0AG-de1dFr6VHNNbK2qZhAMQ62EpVU7Et-n25w8GPcoyr9l4AXIodQpU6p0H0pdntEUqQwJOHaDg
- type: rouge
value: 16.3747
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzkzYWY1NmEyMWNkODQ2N2ExYzMwNWExZDgwNTkxMTg5OTNjYjU5NjMwNWU3NzZhZDYwYzA4M2I0ZmU3Yjg2NiIsInZlcnNpb24iOjF9.tY2mQ0bZU9GMYYTJPot_vgvmiAoubdYWAzEQSQskigleh7AWtsXbO2CnhBsE_7UpsLPVWGccP0IWkHdHRg9zAA
- type: rouge
value: 33.0665
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTEyZGZlNmRhNjllMGExZTJhOWE0NDQwN2Q3MjQyZmM5OGZjZDQwMGE4MGRiMjJmMWVmNjc2ZTQwOWFlMTdmNyIsInZlcnNpb24iOjF9.W1bgFs6XhmbeWJlX_6IvWx6MX-yUj5ErdBU1cGAAZRrEA0elBa_-FdbRkwnLDcBNmBm16vtxPAQfQgJQXmIcDA
- type: loss
value: 2.454127550125122
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTQ0OGMyZGNmZjVlMDYzOTA1NjdlZjZhOThhN2M3ZTZjNWM5N2Y2MjQwZjg4Y2E4MjhiOWUzODFiMzY1YzU0NyIsInZlcnNpb24iOjF9.TOjsyBEWqDD5N9FzJPE9Z7Poj0oXefGryUy7rgj4uXbbWb8DMsMXMcxNVEKixG_vbGyFyASSmgyeW6bAFHaPCw
- type: gen_len
value: 239.4179
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGZmOWY5NmMyNjUzZDM2NmNjNzBjMzU2OTMxYWE2MGFhM2JiMmFmNzQwOTg4NGY5Yzc1NmZjNGZmZjM5NWQzNyIsInZlcnNpb24iOjF9.piE6u39D58dKz2HimpE4Fng7cHELJPuSpZaoEU3gOXSXYw_lx2KQhi2VfFg-mUasmLuQn4bBvMJcWXyBTY8YBw
- task:
type: summarization
name: Summarization
dataset:
name: big_patent
type: big_patent
config: y
split: test
metrics:
- type: rouge
value: 35.615
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWM4ZWQxMjBmNzFlYWMwODg5YTEzOWRmYzBiNmI4ZjBmNmFiZjk2NWQxNDFmY2QzNTA3ZTc5ODZkNmJkZGE4NSIsInZlcnNpb24iOjF9.MABjYbSyTQrT0QxzXM9VRpdDb5dchk1GI_TD_NSB27ozZdWEXyZ-dp44jR-M9mJTSsGk60czxmCF1gq-e4YhAQ
- type: rouge
value: 8.2625
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTk3MmI3ZmQyOTlmYzc4YTkwNjBjOTM3YmE5NjQxOGVkMDFlODc4YjgxMzlhNGRkYThkMzQ5OTU4YWFjYTg0NiIsInZlcnNpb24iOjF9.KHipwLhPWwc55GQpvNe3bSrKOgaAs4sFvLEGvzVa4HWWyvz4oX2ZaytYnURH9Xid7d9nTr7zWYYiwQ7TmSXPDA
- type: rouge
value: 19.9883
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTlhZDk5ZmEyYzgxY2IyNWI1MTk1Nzg2YmVlNmRhMjcyZmFmMWZkNGQ4OWEwYjQwYTk3YzllODdiNzRkN2M5ZCIsInZlcnNpb24iOjF9.ah1-tJ5rUuUToNUHUMf9v9_TGJdhffBMdPDthvo3fmKcFtUQFAMwIloGLp0ePcCS_h8IMEyrtpMwqcDc7jrgAw
- type: rouge
value: 30.1801
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzViMzBiY2I2NWNkMjJmMmZhOTk2YzY3NTFhZTIxOTAzY2ZmNmJlYTlmZDI4YjAyYmRiNDRlNTk0MWJjMmY1MCIsInZlcnNpb24iOjF9.KUPyHMK77clPtJHyXR5WirKcy5O5hZP-MBZE-gFRy21S_sIsHpZNnBuGTJ6AMVi_38MNvDgLQWwSE-4y9eG8Dg
- type: loss
value: 2.8106656074523926
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjA1ZTk2NzA5NDUwMjQ1ZDcxZTA0ZTA3YzdjYzhhZWM1ZjI3MTllYTg2YzAxOTk0Nzk1Yjc0OTRiNzIyOWExZSIsInZlcnNpb24iOjF9.q2sdYyFeFxpjGPKGpJDnoOmzTznwA1Z99GBWOHA-9YUI5q_w_kbV8JdfbiQ9GsaN8EqDlmkCL2kv5lC3xvvUAA
- type: gen_len
value: 170.3483
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2MxNWFjYTg1Yjc3YmNjMjViYjM5ZDdmY2NhNjFjMWQxYWQwOWI3NTczY2M5ZWVmMGM2MmQ0ZmY3M2Y0MDEwZiIsInZlcnNpb24iOjF9.J80uRlSZCVIsvyVkO8rqQ4vyZrgBMu1YpOckAzIaj_jTWKGaOPM3kj6sSePiEN8OLZYwDueqLsKkPa0B6ZXIBw
---
# pszemraj/long-t5-tglobal-base-16384-booksum-V12
> this checkpoint has had some further training and **exists separately to confirm metrics before merging to main**
- training metadata in [this json](training_metadata.json)
- the main model can be found [here](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary)
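For reference, a minimal inference sketch (not part of the original card) that reuses the generation parameters from the widget example in this card's metadata; the input text is a placeholder:

```python
from transformers import pipeline

# Load the checkpoint named in this card.
summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-booksum-V12",
)

long_text = "..."  # placeholder: any long document to summarize

# Generation settings mirror the example parameters listed above.
result = summarizer(
    long_text,
    max_length=64,
    min_length=8,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=3,
    repetition_penalty=3.5,
    length_penalty=0.3,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["summary_text"])
```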
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"BEAR"
] |
medspaner/roberta-es-clinical-trials-misc-ents-ner | medspaner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-01-15T08:17:26 | 2025-03-14T21:29:18 | 55 | 0 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: 'Motivo de consulta: migraña leve. Exploración: Tensión arterial: 120/70 mmHg.'
model-index:
- name: roberta-es-clinical-trials-misc-ents-ner
results: []
---
# roberta-es-clinical-trials-misc-ents-ner
This medical named entity recognition model detects the following clinical entities:
- Concept: e.g. *fecha de inclusión*, 'inclusion date'.
- Food\_or\_Drink: e.g. *soja*, 'soy'; *leche*, 'milk'.
- Observation\_or\_Finding: e.g. *normotenso*, 'normotensive'.
- Quantifier\_or\_Qualifier: e.g. *grave*, 'severe'.
- Result\_or\_Value: e.g. *< 3 LNS*, '< 3 UNL'.
The model achieves the following results on the test set (when trained on the training and development sets; results are averaged over 5 evaluation rounds):
- Precision: 0.685 (±0.008)
- Recall: 0.669 (±0.004)
- F1: 0.677 (±0.003)
- Accuracy: 0.959 (±0.001)
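Below is a minimal usage sketch with the `transformers` token-classification pipeline, reusing the widget text from this card's metadata; the exact `entity_group` strings are assumed to match the entity classes listed above (an illustrative sketch, not an official snippet):

```python
from transformers import pipeline

# aggregation_strategy="simple" merges subword tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="medspaner/roberta-es-clinical-trials-misc-ents-ner",
    aggregation_strategy="simple",
)

text = "Motivo de consulta: migraña leve. Exploración: Tensión arterial: 120/70 mmHg."
for ent in ner(text):
    print(ent["entity_group"], "->", ent["word"], round(float(ent["score"]), 3))
```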
## Model description
This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to conduct medical named entity recognition on Spanish texts about clinical trials.
The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
If you use this model, please cite as follows:
```
@article{campillosetal2025,
title = {{Hybrid natural language processing tool for semantic annotation of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
volume = {26(7)},
year={2025},
doi={https://doi.org/10.1186/s12859-024-05949-6},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for general-purpose use, and may have biases and/or other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1200 texts comprising clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trial announcements published in the European Clinical Trials Register and the Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16.80 on average (±3.56); trained with early stopping (patience: 5 epochs without improvement)
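As a rough sketch, the hyperparameters above translate to the following `TrainingArguments`; the `output_dir`, the epoch cap, and the best-model metric are assumptions, and the `Trainer`/dataset wiring is omitted:

```python
from transformers import TrainingArguments, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="roberta-es-clinical-trials-misc-ents-ner",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,          # assumed upper bound; early stopping ends training sooner
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,  # required for EarlyStoppingCallback
    metric_for_best_model="f1",   # assumption: select the best checkpoint by F1
)

# Then: Trainer(..., args=args, callbacks=[EarlyStoppingCallback(early_stopping_patience=5)])
```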
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.685 (±0.008) | 0.669 (±0.004) | 0.677 (±0.003) | 0.959 (±0.001) |
**Results per class (test set; average and standard deviation of 5 rounds with different seeds)**
| Class | Precision | Recall | F1 | Support |
|:-------------------------:|:--------------:|:--------------:|:--------------:|:---------:|
| Concept | 0.644 (±0.016) | 0.612 (±0.019) | 0.627 (±0.009) | 764 |
| Food\_or\_Drink | 0.692 (±0.049) | 0.733 (±0.071) | 0.712 (±0.058) | 27 |
| Observation\_or\_Finding | 0.626 (±0.015) | 0.617 (±0.010) | 0.621 (±0.010) | 822 |
| Quantifier\_or\_Qualifier | 0.700 (±0.015) | 0.661 (±0.020) | 0.680 (±0.008) | 1202 |
| Result\_or\_Value | 0.828 (±0.013) | 0.910 (±0.005) | 0.867 (±0.007) | 394 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] |
rjnClarke/intfloat-multilingual-e5-small-fine-tuned | rjnClarke | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10359",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-06T13:11:54 | 2024-08-06T13:12:39 | 55 | 0 | ---
base_model: intfloat/multilingual-e5-small
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@3
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@200
- cosine_map@100
- dot_accuracy@3
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@200
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10359
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Cleopatra reacts to the news of Antony's death with a mixture of
sadness and resignation, contemplating her own mortality and the fickle nature
of life.
sentences:
- "Immortal longings in me. Now no more The juice of Egypt's grape shall moist\
\ this lip. Yare, yare, good Iras; quick. Methinks I hear Antony call. I\
\ see him rouse himself To praise my noble act. I hear him mock The luck\
\ of Caesar, which the gods give men To excuse their after wrath. Husband,\
\ I come. Now to that name my courage prove my title! I am fire and air;\
\ my other elements I give to baser life. So, have you done? Come then,\
\ and take the last warmth of my lips. Farewell, kind Charmian. Iras, long\
\ farewell. [Kisses them. IRAS falls and dies] \
\ Have I the aspic in my lips? Dost fall? If thus thou and nature can so gently\
\ part, The stroke of death is as a lover's pinch, Which hurts and is desir'd.\
\ Dost thou lie still? If thou vanishest, thou tell'st the world It is\
\ not worth leave-taking. CHARMIAN. Dissolve, thick cloud, and rain, that I may\
\ say The gods themselves do weep. CLEOPATRA. This proves me base.\n \
\ If she first meet the curled Antony,\n"
- "BURGUNDY. Warlike and martial Talbot, Burgundy\n Enshrines thee in his heart,\
\ and there erects Thy noble deeds as valour's monuments. TALBOT. Thanks,\
\ gentle Duke. But where is Pucelle now? I think her old familiar is asleep.\
\ Now where's the Bastard's braves, and Charles his gleeks? What, all amort?\
\ Rouen hangs her head for grief That such a valiant company are fled. Now\
\ will we take some order in the town, Placing therein some expert officers;\
\ And then depart to Paris to the King, For there young Henry with his nobles\
\ lie. BURGUNDY. What Lord Talbot pleaseth Burgundy. TALBOT. But yet, before\
\ we go, let's not forget The noble Duke of Bedford, late deceas'd, But\
\ see his exequies fulfill'd in Rouen. A braver soldier never couched lance,\
\ A gentler heart did never sway in court; But kings and mightiest potentates\
\ must die, For that's the end of human misery. Exeunt\n"
- "Your suffering in this dearth, you may as well\n Strike at the heaven with\
\ your staves as lift them Against the Roman state; whose course will on \
\ The way it takes, cracking ten thousand curbs Of more strong link asunder\
\ than can ever Appear in your impediment. For the dearth, The gods, not\
\ the patricians, make it, and Your knees to them, not arms, must help. Alack,\
\ You are transported by calamity Thither where more attends you; and you\
\ slander The helms o' th' state, who care for you like fathers, When you\
\ curse them as enemies. FIRST CITIZEN. Care for us! True, indeed! They ne'er\
\ car'd for us yet. Suffer us to famish, and their storehouses cramm'd with\
\ grain; make edicts for usury, to support usurers; repeal daily any wholesome\
\ act established against the rich, and provide more piercing statutes daily\
\ to chain up and restrain the poor. If the wars eat us not up, they will;\
\ and there's all the love they bear us. MENENIUS. Either you must Confess\
\ yourselves wondrous malicious, Or be accus'd of folly. I shall tell you \
\ A pretty tale. It may be you have heard it; But, since it serves my purpose,\
\ I will venture To stale't a little more. FIRST CITIZEN. Well, I'll hear\
\ it, sir; yet you must not think to fob off our disgrace with a tale. But,\
\ an't please you, deliver. MENENIUS. There was a time when all the body's members\
\ Rebell'd against the belly; thus accus'd it: That only like a gulf it\
\ did remain I' th' midst o' th' body, idle and unactive, Still cupboarding\
\ the viand, never bearing Like labour with the rest; where th' other instruments\
\ Did see and hear, devise, instruct, walk, feel,\n And, mutually participate,\
\ did minister\n"
- source_sentence: How does the excerpt reflect themes of loyalty and sacrifice in
the play?
sentences:
- "me a thousand marks in links and torches, walking with thee in\n the night\
\ betwixt tavern and tavern; but the sack that thou hast drunk me would have\
\ bought me lights as good cheap at the dearest chandler's in Europe. I have\
\ maintained that salamander of yours with fire any time this two-and-thirty\
\ years. God reward me for it! Bard. 'Sblood, I would my face were in your\
\ belly! Fal. God-a-mercy! so should I be sure to be heart-burn'd.\n \
\ Enter Hostess. How now, Dame Partlet the hen? Have you enquir'd\
\ yet who pick'd\n my pocket? Host. Why, Sir John, what do you think, Sir\
\ John? Do you think I keep thieves in my house? I have search'd, I have enquired,\
\ so has my husband, man by man, boy by boy, servant by servant. The tithe\
\ of a hair was never lost in my house before. Fal. Ye lie, hostess. Bardolph\
\ was shav'd and lost many a hair, and I'll be sworn my pocket was pick'd.\
\ Go to, you are a woman, go! Host. Who, I? No; I defy thee! God's light, I was\
\ never call'd so in mine own house before! Fal. Go to, I know you well enough.\
\ Host. No, Sir John; you do not know me, Sir John. I know you, Sir John.\
\ You owe me money, Sir John, and now you pick a quarrel to beguile me of\
\ it. I bought you a dozen of shirts to your back. Fal. Dowlas, filthy dowlas!\
\ I have given them away to bakers' wives; they have made bolters of them.\
\ Host. Now, as I am a true woman, holland of eight shillings an ell. You\
\ owe money here besides, Sir John, for your diet and by-drinkings, and money\
\ lent you, four-and-twenty pound. Fal. He had his part of it; let him pay. \
\ Host. He? Alas, he is poor; he hath nothing. Fal. How? Poor? Look upon his\
\ face. What call you rich? Let them coin his nose, let them coin his cheeks.\
\ I'll not pay a denier.\n What, will you make a younker of me? Shall I not\
\ take mine ease\n"
- "EDWARD. I wonder how our princely father scap'd,\n Or whether he be scap'd\
\ away or no From Clifford's and Northumberland's pursuit. Had he been ta'en,\
\ we should have heard the news; Had he been slain, we should have heard the\
\ news; Or had he scap'd, methinks we should have heard The happy tidings\
\ of his good escape. How fares my brother? Why is he so sad? RICHARD. I cannot\
\ joy until I be resolv'd Where our right valiant father is become. I saw\
\ him in the battle range about, And watch'd him how he singled Clifford forth.\
\ Methought he bore him in the thickest troop As doth a lion in a herd of\
\ neat;\n Or as a bear, encompass'd round with dogs,\n Who having pinch'd\
\ a few and made them cry, The rest stand all aloof and bark at him. So\
\ far'd our father with his enemies; So fled his enemies my warlike father.\
\ Methinks 'tis prize enough to be his son. See how the morning opes her\
\ golden gates And takes her farewell of the glorious sun. How well resembles\
\ it the prime of youth, Trimm'd like a younker prancing to his love! EDWARD.\
\ Dazzle mine eyes, or do I see three suns? RICHARD. Three glorious suns, each\
\ one a perfect sun; Not separated with the racking clouds, But sever'd\
\ in a pale clear-shining sky. See, see! they join, embrace, and seem to kiss,\
\ As if they vow'd some league inviolable. Now are they but one lamp, one\
\ light, one sun. In this the heaven figures some event. EDWARD. 'Tis wondrous\
\ strange, the like yet never heard of. I think it cites us, brother, to the\
\ field, That we, the sons of brave Plantagenet, Each one already blazing\
\ by our meeds, Should notwithstanding join our lights together And overshine\
\ the earth, as this the world. Whate'er it bodes, henceforward will I bear\
\ Upon my target three fair shining suns. RICHARD. Nay, bear three daughters-\
\ by your leave I speak it, You love the breeder better than the male.\n"
- "Forget that rarest treasure of your cheek,\n Exposing it- but, O, the harder\
\ heart! Alack, no remedy!- to the greedy touch Of common-kissing Titan,\
\ and forget Your laboursome and dainty trims wherein You made great Juno\
\ angry. IMOGEN. Nay, be brief; I see into thy end, and am almost A man\
\ already. PISANIO. First, make yourself but like one. Fore-thinking this,\
\ I have already fit- 'Tis in my cloak-bag- doublet, hat, hose, all That\
\ answer to them. Would you, in their serving, And with what imitation you\
\ can borrow From youth of such a season, fore noble Lucius Present yourself,\
\ desire his service, tell him Wherein you're happy- which will make him know\
\ If that his head have ear in music; doubtless With joy he will embrace\
\ you; for he's honourable, And, doubling that, most holy. Your means abroad-\
\ You have me, rich; and I will never fail Beginning nor supplyment. IMOGEN.\
\ Thou art all the comfort The gods will diet me with. Prithee away! There's\
\ more to be consider'd; but we'll even All that good time will give us. This\
\ attempt I am soldier to, and will abide it with A prince's courage. Away,\
\ I prithee. PISANIO. Well, madam, we must take a short farewell, Lest, being\
\ miss'd, I be suspected of Your carriage from the court. My noble mistress,\
\ Here is a box; I had it from the Queen. What's in't is precious. If you\
\ are sick at sea Or stomach-qualm'd at land, a dram of this\n Will drive\
\ away distemper. To some shade,\n And fit you to your manhood. May the gods\
\ Direct you to the best! IMOGEN. Amen. I thank thee. Exeunt\
\ severally\n"
- source_sentence: The excerpt showcases the emotional turmoil and sense of honor
that drives Brutus to take his own life in the face of defeat.
sentences:
- "Thou know'st that we two went to school together;\n Even for that our love\
\ of old, I prithee, Hold thou my sword-hilts, whilst I run on it. VOLUMNIUS.\
\ That's not an office for a friend, my lord. \
\ Alarum still. CLITUS. Fly, fly, my lord, there is no tarrying\
\ here. BRUTUS. Farewell to you, and you, and you, Volumnius. Strato, thou\
\ hast been all this while asleep; Farewell to thee too, Strato. Countrymen,\
\ My heart doth joy that yet in all my life I found no man but he was true\
\ to me. I shall have glory by this losing day, More than Octavius and Mark\
\ Antony By this vile conquest shall attain unto. So, fare you well at once,\
\ for Brutus' tongue Hath almost ended his life's history. Night hangs upon\
\ mine eyes, my bones would rest That have but labor'd to attain this hour.\
\ Alarum. Cry within, \"Fly, fly, fly!\" CLITUS. Fly,\
\ my lord, fly. BRUTUS. Hence! I will follow. Exeunt Clitus,\
\ Dardanius, and Volumnius. I prithee, Strato, stay thou by thy lord. Thou\
\ art a fellow of a good respect; Thy life hath had some smatch of honor in\
\ it. Hold then my sword, and turn away thy face, While I do run upon it.\
\ Wilt thou, Strato? STRATO. Give me your hand first. Fare you well, my lord.\
\ BRUTUS. Farewell, good Strato. Runs on his sword. Caesar,\
\ now be still; I kill'd not thee with half so good a will. Dies.\n\
\ Alarum. Retreat. Enter Octavius, Antony, Messala,\n Lucilius,\
\ and the Army.\n OCTAVIUS. What man is that?\n"
- "Elsinore. A room in the Castle.\nEnter King, Queen, Polonius, Ophelia, Rosencrantz,\
\ Guildenstern, and Lords. King. And can you by no drift of circumstance\n \
\ Get from him why he puts on this confusion, Grating so harshly all his days\
\ of quiet With turbulent and dangerous lunacy? Ros. He does confess he feels\
\ himself distracted, But from what cause he will by no means speak. Guil.\
\ Nor do we find him forward to be sounded, But with a crafty madness keeps\
\ aloof When we would bring him on to some confession Of his true state.\
\ Queen. Did he receive you well? Ros. Most like a gentleman. Guil. But with\
\ much forcing of his disposition. Ros. Niggard of question, but of our demands\
\ Most free in his reply. Queen. Did you assay him To any pastime? Ros.\
\ Madam, it so fell out that certain players\n We o'erraught on the way.\
\ Of these we told him,\n"
- "VII.\nThe French camp near Agincourt\nEnter the CONSTABLE OF FRANCE, the LORD\
\ RAMBURES, the DUKE OF ORLEANS,\nthe DAUPHIN, with others\n CONSTABLE. Tut!\
\ I have the best armour of the world.\n Would it were day! ORLEANS. You have\
\ an excellent armour; but let my horse have his due. CONSTABLE. It is the\
\ best horse of Europe. ORLEANS. Will it never be morning? DAUPHIN. My Lord\
\ of Orleans and my Lord High Constable, you talk of horse and armour? ORLEANS.\
\ You are as well provided of both as any prince in the world. DAUPHIN. What\
\ a long night is this! I will not change my horse with any that treads but\
\ on four pasterns. Ca, ha! he bounds from the earth as if his entrails were\
\ hairs; le cheval volant, the Pegasus, chez les narines de feu! When I bestride\
\ him I soar, I am a hawk. He trots the air; the earth sings when he touches\
\ it; the basest horn of his hoof is more musical than the pipe of Hermes.\
\ ORLEANS. He's of the colour of the nutmeg. DAUPHIN. And of the heat of the\
\ ginger. It is a beast for Perseus: he is pure air and fire; and the dull\
\ elements of earth and water never appear in him, but only in patient stillness\
\ while his rider mounts him; he is indeed a horse, and all other jades you\
\ may call beasts. CONSTABLE. Indeed, my lord, it is a most absolute and excellent\
\ horse.\n DAUPHIN. It is the prince of palfreys; his neigh is like the\n"
- source_sentence: What themes are present in the excerpt from the play?
sentences:
- "Enter TRAVERS NORTHUMBERLAND. Here comes my servant Travers, whom I sent\n \
\ On Tuesday last to listen after news. LORD BARDOLPH. My lord, I over-rode\
\ him on the way; And he is furnish'd with no certainties More than he haply\
\ may retail from me. NORTHUMBERLAND. Now, Travers, what good tidings comes with\
\ you? TRAVERS. My lord, Sir John Umfrevile turn'd me back With joyful tidings;\
\ and, being better hors'd, Out-rode me. After him came spurring hard A\
\ gentleman, almost forspent with speed, That stopp'd by me to breathe his\
\ bloodied horse. He ask'd the way to Chester; and of him I did demand what\
\ news from Shrewsbury. He told me that rebellion had bad luck, And that\
\ young Harry Percy's spur was cold. With that he gave his able horse the\
\ head And, bending forward, struck his armed heels\n Against the panting\
\ sides of his poor jade\n Up to the rowel-head; and starting so, He seem'd\
\ in running to devour the way, Staying no longer question. NORTHUMBERLAND.\
\ Ha! Again: Said he young Harry Percy's spur was cold? Of Hotspur, Coldspur?\
\ that rebellion Had met ill luck? LORD BARDOLPH. My lord, I'll tell you what:\
\ If my young lord your son have not the day, Upon mine honour, for a silken\
\ point I'll give my barony. Never talk of it. NORTHUMBERLAND. Why should\
\ that gentleman that rode by Travers Give then such instances of loss? LORD\
\ BARDOLPH. Who- he? He was some hilding fellow that had stol'n The horse\
\ he rode on and, upon my life, Spoke at a venture. Look, here comes more news.\
\ \n Enter Morton NORTHUMBERLAND. Yea, this man's brow,\
\ like to a title-leaf,\n"
- "ANTONY. Yet they are not join'd. Where yond pine does stand\n I shall discover\
\ all. I'll bring thee word Straight how 'tis like to go. \
\ Exit SCARUS. Swallows have built In Cleopatra's sails their nests.\
\ The augurers Say they know not, they cannot tell; look grimly, And dare\
\ not speak their knowledge. Antony Is valiant and dejected; and by starts\
\ His fretted fortunes give him hope and fear Of what he has and has not.\
\ [Alarum afar off, as at a sea-fight]\n \
\ Re-enter ANTONY ANTONY. All is lost!\n This foul Egyptian hath\
\ betrayed me. My fleet hath yielded to the foe, and yonder They cast\
\ their caps up and carouse together Like friends long lost. Triple-turn'd\
\ whore! 'tis thou\n Hast sold me to this novice; and my heart\n Makes\
\ only wars on thee. Bid them all fly; For when I am reveng'd upon my charm,\
\ I have done all. Bid them all fly; begone. Exit SCARUS O sun, thy\
\ uprise shall I see no more! Fortune and Antony part here; even here Do\
\ we shake hands. All come to this? The hearts That spaniel'd me at heels,\
\ to whom I gave Their wishes, do discandy, melt their sweets On blossoming\
\ Caesar; and this pine is bark'd That overtopp'd them all. Betray'd I am.\
\ O this false soul of Egypt! this grave charm- Whose eye beck'd forth my\
\ wars and call'd them home, Whose bosom was my crownet, my chief end- Like\
\ a right gypsy hath at fast and loose Beguil'd me to the very heart of loss.\
\ What, Eros, Eros! Enter CLEOPATRA\n Ah, thou spell!\
\ Avaunt!\n"
- "TALBOT. Saint George and victory! Fight, soldiers, fight.\n The Regent hath\
\ with Talbot broke his word And left us to the rage of France his sword. \
\ Where is John Talbot? Pause and take thy breath; I gave thee life and rescu'd\
\ thee from death. JOHN. O, twice my father, twice am I thy son! The life\
\ thou gav'st me first was lost and done Till with thy warlike sword, despite\
\ of fate, To my determin'd time thou gav'st new date. TALBOT. When from the\
\ Dauphin's crest thy sword struck fire, It warm'd thy father's heart with\
\ proud desire Of bold-fac'd victory. Then leaden age, Quicken'd with youthful\
\ spleen and warlike rage, Beat down Alencon, Orleans, Burgundy, And from\
\ the pride of Gallia rescued thee. The ireful bastard Orleans, that drew blood\
\ From thee, my boy, and had the maidenhood Of thy first fight, I soon encountered\
\ And, interchanging blows, I quickly shed Some of his bastard blood; and\
\ in disgrace\n Bespoke him thus: 'Contaminated, base,\n"
- source_sentence: What is the significance of the tennis balls in the excerpt from
the play?
sentences:
- "My fault is past. But, O, what form of prayer\n Can serve my turn? 'Forgive\
\ me my foul murther'? That cannot be; since I am still possess'd Of those\
\ effects for which I did the murther- My crown, mine own ambition, and my\
\ queen. May one be pardon'd and retain th' offence? In the corrupted currents\
\ of this world Offence's gilded hand may shove by justice, And oft 'tis\
\ seen the wicked prize itself Buys out the law; but 'tis not so above. \
\ There is no shuffling; there the action lies In his true nature, and we ourselves\
\ compell'd, Even to the teeth and forehead of our faults, To give in evidence.\
\ What then? What rests? Try what repentance can. What can it not? Yet what\
\ can it when one cannot repent? O wretched state! O bosom black as death!\
\ O limed soul, that, struggling to be free, Art more engag'd! Help, angels!\
\ Make assay. Bow, stubborn knees; and heart with strings of steel, Be\
\ soft as sinews of the new-born babe! All may be well. \
\ He kneels.\n Enter Hamlet. Ham. Now might\
\ I do it pat, now he is praying;\n And now I'll do't. And so he goes to heaven,\
\ And so am I reveng'd. That would be scann'd. A villain kills my father;\
\ and for that, I, his sole son, do this same villain send To heaven. \
\ Why, this is hire and salary, not revenge! He took my father grossly, full\
\ of bread, With all his crimes broad blown, as flush as May; And how his\
\ audit stands, who knows save heaven?\n But in our circumstance and course\
\ of thought,\n"
- "YORK. From Ireland thus comes York to claim his right\n And pluck the crown\
\ from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright,\
\ To entertain great England's lawful king. Ah, sancta majestas! who would\
\ not buy thee dear? Let them obey that knows not how to rule; This hand\
\ was made to handle nought but gold. I cannot give due action to my words\
\ Except a sword or sceptre balance it.\n A sceptre shall it have, have\
\ I a soul\n On which I'll toss the flower-de-luce of France.\n \
\ Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb\
\ me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York,\
\ if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept\
\ thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger\
\ from Henry, our dread liege, To know the reason of these arms in peace; \
\ Or why thou, being a subject as I am, Against thy oath and true allegiance\
\ sworn, Should raise so great a power without his leave, Or dare to bring\
\ thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is\
\ so great. O, I could hew up rocks and fight with flint, I am so angry\
\ at these abject terms; And now, like Ajax Telamonius, On sheep or oxen\
\ could I spend my fury. I am far better born than is the King, More like\
\ a king, more kingly in my thoughts; But I must make fair weather yet awhile,\
\ Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon\
\ me That I have given no answer all this while; My mind was troubled with\
\ deep melancholy. The cause why I have brought this army hither Is to\
\ remove proud Somerset from the King, Seditious to his Grace and to the state.\
\ BUCKINGHAM. That is too much presumption on thy part; But if thy arms be\
\ to no other end, The King hath yielded unto thy demand:\n The Duke of\
\ Somerset is in the Tower.\n"
- "Says that you savour too much of your youth,\n And bids you be advis'd there's\
\ nought in France That can be with a nimble galliard won; You cannot revel\
\ into dukedoms there. He therefore sends you, meeter for your spirit, This\
\ tun of treasure; and, in lieu of this, Desires you let the dukedoms that\
\ you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What\
\ treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the\
\ Dauphin is so pleasant with us; His present and your pains we thank you for.\
\ When we have match'd our rackets to these balls, We will in France,\
\ by God's grace, play a set Shall strike his father's crown into the hazard.\
\ Tell him he hath made a match with such a wrangler That all the courts\
\ of France will be disturb'd With chaces. And we understand him well, How\
\ he comes o'er us with our wilder days, Not measuring what use we made of\
\ them. We never valu'd this poor seat of England; And therefore, living\
\ hence, did give ourself To barbarous licence; as 'tis ever common That\
\ men are merriest when they are from home. But tell the Dauphin I will keep\
\ my state, Be like a king, and show my sail of greatness, When I do rouse\
\ me in my throne of France; For that I have laid by my majesty And plodded\
\ like a man for working-days; But I will rise there with so full a glory \
\ That I will dazzle all the eyes of France, Yea, strike the Dauphin blind\
\ to look on us. And tell the pleasant Prince this mock of his Hath turn'd\
\ his balls to gun-stones, and his soul Shall stand sore charged for the wasteful\
\ vengeance\n That shall fly with them; for many a thousand widows\n"
model-index:
- name: RAG_general/rerank/models/intfloat-multilingual-e5-small-ft
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: multi dev
type: multi-dev
metrics:
- type: cosine_accuracy@3
value: 0.5091225021720244
name: Cosine Accuracy@3
- type: cosine_precision@1
value: 0.39009556907037357
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1697075007240081
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10990443092962641
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.060165073848827105
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.39009556907037357
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5091225021720244
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.549522154648132
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6016507384882711
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.49395277764966705
name: Cosine Ndcg@10
- type: cosine_mrr@200
value: 0.46542201164534946
name: Cosine Mrr@200
- type: cosine_map@100
value: 0.4650878782295526
name: Cosine Map@100
- type: dot_accuracy@3
value: 0.5091225021720244
name: Dot Accuracy@3
- type: dot_precision@1
value: 0.39009556907037357
name: Dot Precision@1
- type: dot_precision@3
value: 0.1697075007240081
name: Dot Precision@3
- type: dot_precision@5
value: 0.10990443092962641
name: Dot Precision@5
- type: dot_precision@10
value: 0.060165073848827105
name: Dot Precision@10
- type: dot_recall@1
value: 0.39009556907037357
name: Dot Recall@1
- type: dot_recall@3
value: 0.5091225021720244
name: Dot Recall@3
- type: dot_recall@5
value: 0.549522154648132
name: Dot Recall@5
- type: dot_recall@10
value: 0.6016507384882711
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.49395277764966705
name: Dot Ndcg@10
- type: dot_mrr@200
value: 0.46542201164534946
name: Dot Mrr@200
- type: dot_map@100
value: 0.4650878782295526
name: Dot Map@100
---
# RAG_general/rerank/models/intfloat-multilingual-e5-small-ft
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision fd1525a9fd15316a2d503bf26ab031a61d056e98 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rjnClarke/intfloat-multilingual-e5-small-fine-tuned")
# Run inference
sentences = [
'What is the significance of the tennis balls in the excerpt from the play?',
"Says that you savour too much of your youth,\n And bids you be advis'd there's nought in France That can be with a nimble galliard won; You cannot revel into dukedoms there. He therefore sends you, meeter for your spirit, This tun of treasure; and, in lieu of this, Desires you let the dukedoms that you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant with us; His present and your pains we thank you for. When we have match'd our rackets to these balls, We will in France, by God's grace, play a set Shall strike his father's crown into the hazard. Tell him he hath made a match with such a wrangler That all the courts of France will be disturb'd With chaces. And we understand him well, How he comes o'er us with our wilder days, Not measuring what use we made of them. We never valu'd this poor seat of England; And therefore, living hence, did give ourself To barbarous licence; as 'tis ever common That men are merriest when they are from home. But tell the Dauphin I will keep my state, Be like a king, and show my sail of greatness, When I do rouse me in my throne of France; For that I have laid by my majesty And plodded like a man for working-days; But I will rise there with so full a glory That I will dazzle all the eyes of France, Yea, strike the Dauphin blind to look on us. And tell the pleasant Prince this mock of his Hath turn'd his balls to gun-stones, and his soul Shall stand sore charged for the wasteful vengeance\n That shall fly with them; for many a thousand widows\n",
"YORK. From Ireland thus comes York to claim his right\n And pluck the crown from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright, To entertain great England's lawful king. Ah, sancta majestas! who would not buy thee dear? Let them obey that knows not how to rule; This hand was made to handle nought but gold. I cannot give due action to my words Except a sword or sceptre balance it.\n A sceptre shall it have, have I a soul\n On which I'll toss the flower-de-luce of France.\n Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York, if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger from Henry, our dread liege, To know the reason of these arms in peace; Or why thou, being a subject as I am, Against thy oath and true allegiance sworn, Should raise so great a power without his leave, Or dare to bring thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is so great. O, I could hew up rocks and fight with flint, I am so angry at these abject terms; And now, like Ajax Telamonius, On sheep or oxen could I spend my fury. I am far better born than is the King, More like a king, more kingly in my thoughts; But I must make fair weather yet awhile, Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon me That I have given no answer all this while; My mind was troubled with deep melancholy. The cause why I have brought this army hither Is to remove proud Somerset from the King, Seditious to his Grace and to the state. BUCKINGHAM. That is too much presumption on thy part; But if thy arms be to no other end, The King hath yielded unto thy demand:\n The Duke of Somerset is in the Tower.\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `multi-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@3 | 0.5091 |
| cosine_precision@1 | 0.3901 |
| cosine_precision@3 | 0.1697 |
| cosine_precision@5 | 0.1099 |
| cosine_precision@10 | 0.0602 |
| cosine_recall@1 | 0.3901 |
| cosine_recall@3 | 0.5091 |
| cosine_recall@5 | 0.5495 |
| cosine_recall@10 | 0.6017 |
| cosine_ndcg@10 | 0.494 |
| cosine_mrr@200 | 0.4654 |
| **cosine_map@100** | **0.4651** |
| dot_accuracy@3 | 0.5091 |
| dot_precision@1 | 0.3901 |
| dot_precision@3 | 0.1697 |
| dot_precision@5 | 0.1099 |
| dot_precision@10 | 0.0602 |
| dot_recall@1 | 0.3901 |
| dot_recall@3 | 0.5091 |
| dot_recall@5 | 0.5495 |
| dot_recall@10 | 0.6017 |
| dot_ndcg@10 | 0.494 |
| dot_mrr@200 | 0.4654 |
| dot_map@100 | 0.4651 |
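
A minimal sketch of re-running this evaluation with the evaluator named above; the `queries`, `corpus`, and `relevant_docs` below are hypothetical toy stand-ins for the real dev split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rjnClarke/intfloat-multilingual-e5-small-fine-tuned")

# Toy data: the actual evaluation uses the 2,302-sample dev set.
queries = {"q1": "What is the significance of the tennis balls in the excerpt?"}
corpus = {
    "d1": "EXETER. Tennis-balls, my liege. ...",
    "d2": "YORK. From Ireland thus comes York to claim his right ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="multi-dev")
print(evaluator(model))
```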
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,359 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 25.61 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 390.39 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Who is the general being described in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>What is the main conflict highlighted in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>The excerpt showcases the tension between Antony's loyalty to Cleopatra and his obligations to Caesar, as well as Cleopatra's influence over him.</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
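
For illustration, a minimal training sketch wiring this loss with the stated `scale` (cosine similarity is the loss's default `similarity_fct`); the toy pair and the plain `DataLoader` are simplifying assumptions, since the card's `batch_sampler: no_duplicates` corresponds to `sentence_transformers.datasets.NoDuplicatesDataLoader`:

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("intfloat/multilingual-e5-small")

# Toy (anchor, positive) pair standing in for the 10,359-sample training set.
train_examples = [
    InputExample(texts=[
        "Who is the general being described in the excerpt?",
        "PHILO. Nay, but this dotage of our general's o'erflows the measure. ...",
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)

# scale=20.0 matches the parameters above; cos_sim is the default similarity_fct.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=7, warmup_steps=50)
```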
### Evaluation Dataset
#### Unnamed Dataset
* Size: 2,302 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 25.55 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 395.63 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The excerpt highlights the tension between Antony's loyalty to Cleopatra and his standing in Rome, showcasing the intricate balance of power and love in the play.</code> | <code>When shrill-tongu'd Fulvia scolds. The messengers!<br> ANTONY. Let Rome in Tiber melt, and the wide arch Of the rang'd empire fall! Here is my space. Kingdoms are clay; our dungy earth alike Feeds beast as man. The nobleness of life Is to do thus [emhracing], when such a mutual pair And such a twain can do't, in which I bind, On pain of punishment, the world to weet We stand up peerless. CLEOPATRA. Excellent falsehood! Why did he marry Fulvia, and not love her? I'll seem the fool I am not. Antony Will be himself. ANTONY. But stirr'd by Cleopatra. Now for the love of Love and her soft hours, Let's not confound the time with conference harsh; There's not a minute of our lives should stretch Without some pleasure now. What sport to-night? CLEOPATRA. Hear the ambassadors. ANTONY. Fie, wrangling queen! Whom everything becomes- to chide, to laugh, To weep; whose every passion fully strives To make itself in thee fair and admir'd. No messenger but thine, and all alone To-night we'll wander through the streets and note The qualities of people. Come, my queen; Last night you did desire it. Speak not to us. Exeunt ANTONY and CLEOPATRA, with the train DEMETRIUS. Is Caesar with Antonius priz'd so slight? PHILO. Sir, sometimes when he is not Antony, He comes too short of that great property Which still should go with Antony. DEMETRIUS. I am full sorry That he approves the common liar, who Thus speaks of him at Rome; but I will hope<br> Of better deeds to-morrow. Rest you happy! Exeunt<br></code> |
| <code>What is the significance of the soothsayer in the context of the play?</code> | <code>CHARMIAN. Lord Alexas, sweet Alexas, most anything Alexas, almost<br> most absolute Alexas, where's the soothsayer that you prais'd so to th' Queen? O that I knew this husband, which you say must charge his horns with garlands! ALEXAS. Soothsayer! SOOTHSAYER. Your will? CHARMIAN. Is this the man? Is't you, sir, that know things? SOOTHSAYER. In nature's infinite book of secrecy A little I can read. ALEXAS. Show him your hand.<br> Enter ENOBARBUS ENOBARBUS. Bring in the banquet quickly; wine enough<br> Cleopatra's health to drink. CHARMIAN. Good, sir, give me good fortune. SOOTHSAYER. I make not, but foresee. CHARMIAN. Pray, then, foresee me one. SOOTHSAYER. You shall be yet far fairer than you are. CHARMIAN. He means in flesh. IRAS. No, you shall paint when you are old. CHARMIAN. Wrinkles forbid! ALEXAS. Vex not his prescience; be attentive. CHARMIAN. Hush!<br> SOOTHSAYER. You shall be more beloving than beloved.<br></code> |
| <code>What is the setting of the scene in which the excerpt takes place?</code> | <code>sweet Isis, I beseech thee! And let her die too, and give him a<br> worse! And let worse follow worse, till the worst of all follow him laughing to his grave, fiftyfold a cuckold! Good Isis, hear me this prayer, though thou deny me a matter of more weight; good Isis, I beseech thee! IRAS. Amen. Dear goddess, hear that prayer of the people! For, as it is a heartbreaking to see a handsome man loose-wiv'd, so it is a deadly sorrow to behold a foul knave uncuckolded. Therefore, dear Isis, keep decorum, and fortune him accordingly! CHARMIAN. Amen. ALEXAS. Lo now, if it lay in their hands to make me a cuckold, they would make themselves whores but they'ld do't!<br> Enter CLEOPATRA ENOBARBUS. Hush! Here comes Antony.<br> CHARMIAN. Not he; the Queen. CLEOPATRA. Saw you my lord? ENOBARBUS. No, lady. CLEOPATRA. Was he not here? CHARMIAN. No, madam. CLEOPATRA. He was dispos'd to mirth; but on the sudden A Roman thought hath struck him. Enobarbus! ENOBARBUS. Madam? CLEOPATRA. Seek him, and bring him hither. Where's Alexas? ALEXAS. Here, at your service. My lord approaches.<br> Enter ANTONY, with a MESSENGER and attendants CLEOPATRA. We will not look upon him. Go with us.<br> Exeunt CLEOPATRA, ENOBARBUS, and the rest MESSENGER. Fulvia thy wife first came into the field. ANTONY. Against my brother Lucius? MESSENGER. Ay.<br> But soon that war had end, and the time's state<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
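For reference, a minimal sketch of how a loss with these parameters is typically instantiated in Sentence Transformers (the checkpoint name is illustrative; any `SentenceTransformer` model works):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

# Illustrative checkpoint; substitute the model actually being fine-tuned.
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v1.5")

# Mirrors the parameters above: scale=20.0, similarity_fct=cos_sim.
loss = MultipleNegativesRankingLoss(model=model, scale=20.0, similarity_fct=cos_sim)
```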
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `num_train_epochs`: 7
- `warmup_steps`: 50
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
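As a non-authoritative sketch, the non-default values above map onto `SentenceTransformerTrainingArguments` roughly as follows (`output_dir` is a placeholder, not taken from this card):
```python
from sentence_transformers.training_args import (
    SentenceTransformerTrainingArguments,
    BatchSamplers,
)

# "output" is a placeholder path; all other values mirror the list above.
args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,
    num_train_epochs=7,
    warmup_steps=50,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```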
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 7
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 50
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | multi-dev_cosine_map@100 |
|:-------:|:--------:|:-------------:|:----------:|:------------------------:|
| 1.0 | 162 | - | 1.7998 | 0.4106 |
| 2.0 | 324 | - | 1.6831 | 0.4286 |
| 3.0 | 486 | - | 1.6670 | 0.4343 |
| 3.0864 | 500 | 1.7796 | - | - |
| 4.0 | 648 | - | 1.6174 | 0.4501 |
| 5.0 | 810 | - | 1.5971 | 0.4559 |
| 6.0 | 972 | - | 1.5842 | 0.4620 |
| 6.1728 | 1000 | 1.0289 | - | - |
| **7.0** | **1134** | **-** | **1.5726** | **0.4651** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.43.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
mogaio/pr_ebsa_fr_tran_merged25_e5_end_offsets | mogaio | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-15T18:25:53 | 2023-12-15T18:27:02 | 54 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: setfit
metrics:
- accuracy_score
- classification_report
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Adil Hussain
Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin
Shah à l''époque où il fréquentait l''École nationale d''art dramatique'
- text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un
avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs
de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré
que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en
accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates
aspirent à renverser six circonscriptions détenues par les républicains que M.
Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés
de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine
Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la
conférence du parti démocrate à la Chambre des représentants,
Suite à la page suivante
a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions
de dollars aux campagnes dans les circonscriptions de New York Des problèmes à
venir pour les démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les
démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge.
Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune
issue ne soit en vue : les retombées potentielles pour leur parti lors des élections
de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils
parlent d''immigration - comme les démocrates le font pour l''avortement - et
sont clairement à l''attaque sur la question des migrants à New York, tandis que
les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication
pour le Centre de politique de l''Université de Virginie, au réseau USA Today
Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud
depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville,
et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au
nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux
frais de la ville Les démocrates doivent y remporter des victoires pour gagner
cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain
président de la Chambre des représentants Les publicités d''attaque des républicains
s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images
télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams
et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute
et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac
Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales
à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique
de la crise des migrants, soulignant que les élections de 2024 n''auront lieu
que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient
se poser'
- text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA
H-1B AUX ETATS-UNIS
Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy,
candidat républicain indien-américain à l''élection présidentielle, a promis de
"vider" le système basé sur la loterie et de le remplacer par un système d''admission
méritocratique s''il remporte les élections présidentielles de 2024'
- text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover
("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris
Owen Donald Glover
("Queer as Folk") a 54 ans. a 54 ans. Acteur
("Je sais ce que vous avez fait l''été dernier") a 50 ans'
- text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche.
"Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi
en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de
ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient
même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne
peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens
les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi
réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart,
ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago,
voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations
de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection
américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé
Howard, qui était le roi de tous les médias, en prince Harry de tous les médias.
Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission
de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire
type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous
avec lui ?"
M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre
de ses sketches à l''antenne, a été un critique virulent de Trump tout au long
de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à
nouveau en 2024.
En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu
l''élection ?"
Il a poursuivi en qualifiant les partisans de Trump de "nigauds".
"Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il
y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré
Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface
de la terre, pourquoi traîniez-vous avec lui ?"
M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas
soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes
qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke"
comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué
ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus
récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans
un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump
a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon
OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern
chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego".
Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela
signifie, ou que je soutiens les personnes qui veulent être transgenres ou que
je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine.
"Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs.
L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course
à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé
sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy
Failla critique Stern.
"L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire
la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne",
a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre
Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral'
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy_score
value: 0.923784494086728
name: Accuracy_Score
- type: classification_report
value:
'0':
precision: 0.9251101321585903
recall: 0.8898305084745762
f1-score: 0.9071274298056154
support: 236
'1':
precision: 0.9081967213114754
recall: 0.920265780730897
f1-score: 0.9141914191419142
support: 301
'2':
precision: 0.9432314410480349
recall: 0.9642857142857143
f1-score: 0.9536423841059601
support: 224
accuracy: 0.923784494086728
macro avg:
precision: 0.9255127648393668
recall: 0.9247940011637291
f1-score: 0.9249870776844965
support: 761
weighted avg:
precision: 0.9237543325873079
recall: 0.923784494086728
f1-score: 0.9236131204146865
support: 761
name: Classification_Report
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
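As an illustration only (the dataset contents below are placeholders, not this card's actual training data), a SetFit run covering both steps typically looks like:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer

# Placeholder examples; the real training set is described under "Training Details".
train_dataset = Dataset.from_dict({
    "text": ["un exemple positif", "un exemple objectif", "un exemple négatif"],
    "label": [2, 1, 0],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2"
)
trainer = Trainer(model=model, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning; step 2: fits the LogisticRegression head
```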
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> |
| neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. 
Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> |
| obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. 
Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy_Score | Classification_Report |
|:--------|:---------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **all** | 0.9238 | {'0': {'precision': 0.9251101321585903, 'recall': 0.8898305084745762, 'f1-score': 0.9071274298056154, 'support': 236}, '1': {'precision': 0.9081967213114754, 'recall': 0.920265780730897, 'f1-score': 0.9141914191419142, 'support': 301}, '2': {'precision': 0.9432314410480349, 'recall': 0.9642857142857143, 'f1-score': 0.9536423841059601, 'support': 224}, 'accuracy': 0.923784494086728, 'macro avg': {'precision': 0.9255127648393668, 'recall': 0.9247940011637291, 'f1-score': 0.9249870776844965, 'support': 761}, 'weighted avg': {'precision': 0.9237543325873079, 'recall': 0.923784494086728, 'f1-score': 0.9236131204146865, 'support': 761}} |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e5_end_offsets")
# Run inference
preds = model("Adil Hussain
Adil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin Shah à l'époque où il fréquentait l'École nationale d'art dramatique")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 9 | 247.2638 | 2089 |
| Label | Training Sample Count |
|:------|:----------------------|
| neg | 913 |
| obj | 1216 |
| pos | 911 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 1
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.3703 | - |
| 0.0658 | 50 | 0.3145 | - |
| 0.1316 | 100 | 0.1839 | - |
| 0.1974 | 150 | 0.2558 | - |
| 0.2632 | 200 | 0.2683 | - |
| 0.3289 | 250 | 0.1572 | - |
| 0.3947 | 300 | 0.1953 | - |
| 0.4605 | 350 | 0.171 | - |
| 0.5263 | 400 | 0.2326 | - |
| 0.5921 | 450 | 0.1762 | - |
| 0.6579 | 500 | 0.2818 | - |
| 0.7237 | 550 | 0.2733 | - |
| 0.7895 | 600 | 0.195 | - |
| 0.8553 | 650 | 0.2104 | - |
| 0.9211 | 700 | 0.2124 | - |
| 0.9868 | 750 | 0.0818 | - |
| 1.0526 | 800 | 0.1046 | - |
| 1.1184 | 850 | 0.1633 | - |
| 1.1842 | 900 | 0.3207 | - |
| 1.25 | 950 | 0.2703 | - |
| 1.3158 | 1000 | 0.1934 | - |
| 1.3816 | 1050 | 0.2547 | - |
| 1.4474 | 1100 | 0.0933 | - |
| 1.5132 | 1150 | 0.2102 | - |
| 1.5789 | 1200 | 0.0699 | - |
| 1.6447 | 1250 | 0.1778 | - |
| 1.7105 | 1300 | 0.1796 | - |
| 1.7763 | 1350 | 0.0221 | - |
| 1.8421 | 1400 | 0.2154 | - |
| 1.9079 | 1450 | 0.1683 | - |
| 1.9737 | 1500 | 0.3096 | - |
| 2.0395 | 1550 | 0.201 | - |
| 2.1053 | 1600 | 0.1954 | - |
| 2.1711 | 1650 | 0.2301 | - |
| 2.2368 | 1700 | 0.1141 | - |
| 2.3026 | 1750 | 0.1949 | - |
| 2.3684 | 1800 | 0.164 | - |
| 2.4342 | 1850 | 0.2307 | - |
| 2.5 | 1900 | 0.1912 | - |
| 2.5658 | 1950 | 0.2349 | - |
| 2.6316 | 2000 | 0.0922 | - |
| 2.6974 | 2050 | 0.0702 | - |
| 2.7632 | 2100 | 0.1089 | - |
| 2.8289 | 2150 | 0.1711 | - |
| 2.8947 | 2200 | 0.1432 | - |
| 2.9605 | 2250 | 0.2739 | - |
| 3.0263 | 2300 | 0.1889 | - |
| 3.0921 | 2350 | 0.1036 | - |
| 3.1579 | 2400 | 0.1372 | - |
| 3.2237 | 2450 | 0.028 | - |
| 3.2895 | 2500 | 0.1739 | - |
| 3.3553 | 2550 | 0.142 | - |
| 3.4211 | 2600 | 0.0838 | - |
| 3.4868 | 2650 | 0.0657 | - |
| 3.5526 | 2700 | 0.0054 | - |
| 3.6184 | 2750 | 0.0426 | - |
| 3.6842 | 2800 | 0.1974 | - |
| 3.75 | 2850 | 0.0279 | - |
| 3.8158 | 2900 | 0.1326 | - |
| 3.8816 | 2950 | 0.1614 | - |
| 3.9474 | 3000 | 0.1251 | - |
| 4.0132 | 3050 | 0.1174 | - |
| 4.0789 | 3100 | 0.1948 | - |
| 4.1447 | 3150 | 0.0555 | - |
| 4.2105 | 3200 | 0.0064 | - |
| 4.2763 | 3250 | 0.064 | - |
| 4.3421 | 3300 | 0.0013 | - |
| 4.4079 | 3350 | 0.135 | - |
| 4.4737 | 3400 | 0.0574 | - |
| 4.5395 | 3450 | 0.174 | - |
| 4.6053 | 3500 | 0.2199 | - |
| 4.6711 | 3550 | 0.387 | - |
| 4.7368 | 3600 | 0.114 | - |
| 4.8026 | 3650 | 0.0853 | - |
| 4.8684 | 3700 | 0.0325 | - |
| 4.9342 | 3750 | 0.019 | - |
| 5.0 | 3800 | 0.0572 | - |
| 0.0013 | 1 | 0.1435 | - |
| 0.0658 | 50 | 0.0969 | - |
| 0.1316 | 100 | 0.1085 | - |
| 0.1974 | 150 | 0.0271 | - |
| 0.2632 | 200 | 0.0138 | - |
| 0.3289 | 250 | 0.058 | - |
| 0.3947 | 300 | 0.1205 | - |
| 0.4605 | 350 | 0.0788 | - |
| 0.5263 | 400 | 0.1449 | - |
| 0.5921 | 450 | 0.0383 | - |
| 0.6579 | 500 | 0.0338 | - |
| 0.7237 | 550 | 0.1253 | - |
| 0.7895 | 600 | 0.069 | - |
| 0.8553 | 650 | 0.104 | - |
| 0.9211 | 700 | 0.0462 | - |
| 0.9868 | 750 | 0.1975 | - |
| 1.0526 | 800 | 0.0241 | - |
| 1.1184 | 850 | 0.0426 | - |
| 1.1842 | 900 | 0.0519 | - |
| 1.25 | 950 | 0.0815 | - |
| 1.3158 | 1000 | 0.1839 | - |
| 1.3816 | 1050 | 0.0198 | - |
| 1.4474 | 1100 | 0.0128 | - |
| 1.5132 | 1150 | 0.1645 | - |
| 1.5789 | 1200 | 0.0019 | - |
| 1.6447 | 1250 | 0.0557 | - |
| 1.7105 | 1300 | 0.0098 | - |
| 1.7763 | 1350 | 0.001 | - |
| 1.8421 | 1400 | 0.1557 | - |
| 1.9079 | 1450 | 0.1286 | - |
| 1.9737 | 1500 | 0.094 | - |
| 2.0395 | 1550 | 0.0059 | - |
| 2.1053 | 1600 | 0.0227 | - |
| 2.1711 | 1650 | 0.0899 | - |
| 2.2368 | 1700 | 0.0053 | - |
| 2.3026 | 1750 | 0.0021 | - |
| 2.3684 | 1800 | 0.0114 | - |
| 2.4342 | 1850 | 0.1163 | - |
| 2.5 | 1900 | 0.0959 | - |
| 2.5658 | 1950 | 0.0252 | - |
| 2.6316 | 2000 | 0.0921 | - |
| 2.6974 | 2050 | 0.1159 | - |
| 2.7632 | 2100 | 0.0026 | - |
| 2.8289 | 2150 | 0.1211 | - |
| 2.8947 | 2200 | 0.1843 | - |
| 2.9605 | 2250 | 0.0014 | - |
| 3.0263 | 2300 | 0.0085 | - |
| 3.0921 | 2350 | 0.0839 | - |
| 3.1579 | 2400 | 0.2372 | - |
| 3.2237 | 2450 | 0.0213 | - |
| 3.2895 | 2500 | 0.0155 | - |
| 3.3553 | 2550 | 0.1128 | - |
| 3.4211 | 2600 | 0.0945 | - |
| 3.4868 | 2650 | 0.0917 | - |
| 3.5526 | 2700 | 0.0011 | - |
| 3.6184 | 2750 | 0.0024 | - |
| 3.6842 | 2800 | 0.0044 | - |
| 3.75 | 2850 | 0.121 | - |
| 3.8158 | 2900 | 0.0056 | - |
| 3.8816 | 2950 | 0.003 | - |
| 3.9474 | 3000 | 0.0899 | - |
| 4.0132 | 3050 | 0.0157 | - |
| 4.0789 | 3100 | 0.1188 | - |
| 4.1447 | 3150 | 0.001 | - |
| 4.2105 | 3200 | 0.0222 | - |
| 4.2763 | 3250 | 0.1209 | - |
| 4.3421 | 3300 | 0.1085 | - |
| 4.4079 | 3350 | 0.0054 | - |
| 4.4737 | 3400 | 0.0009 | - |
| 4.5395 | 3450 | 0.0015 | - |
| 4.6053 | 3500 | 0.003 | - |
| 4.6711 | 3550 | 0.0009 | - |
| 4.7368 | 3600 | 0.0003 | - |
| 4.8026 | 3650 | 0.0009 | - |
| 4.8684 | 3700 | 0.03 | - |
| 4.9342 | 3750 | 0.1206 | - |
| 5.0 | 3800 | 0.0003 | - |
| 0.0013 | 1 | 0.2045 | - |
| 0.0658 | 50 | 0.0078 | - |
| 0.1316 | 100 | 0.0087 | - |
| 0.1974 | 150 | 0.0386 | - |
| 0.2632 | 200 | 0.1015 | - |
| 0.3289 | 250 | 0.0022 | - |
| 0.3947 | 300 | 0.0291 | - |
| 0.4605 | 350 | 0.0013 | - |
| 0.5263 | 400 | 0.0022 | - |
| 0.5921 | 450 | 0.1324 | - |
| 0.6579 | 500 | 0.113 | - |
| 0.7237 | 550 | 0.0011 | - |
| 0.7895 | 600 | 0.1723 | - |
| 0.8553 | 650 | 0.0049 | - |
| 0.9211 | 700 | 0.206 | - |
| 0.9868 | 750 | 0.1683 | - |
| 1.0526 | 800 | 0.0954 | - |
| 1.1184 | 850 | 0.018 | - |
| 1.1842 | 900 | 0.1854 | - |
| 1.25 | 950 | 0.0342 | - |
| 1.3158 | 1000 | 0.0015 | - |
| 1.3816 | 1050 | 0.0062 | - |
| 1.4474 | 1100 | 0.1187 | - |
| 1.5132 | 1150 | 0.0048 | - |
| 1.5789 | 1200 | 0.0011 | - |
| 1.6447 | 1250 | 0.002 | - |
| 1.7105 | 1300 | 0.092 | - |
| 1.7763 | 1350 | 0.1245 | - |
| 1.8421 | 1400 | 0.0009 | - |
| 1.9079 | 1450 | 0.1185 | - |
| 1.9737 | 1500 | 0.0017 | - |
| 2.0395 | 1550 | 0.008 | - |
| 2.1053 | 1600 | 0.0049 | - |
| 2.1711 | 1650 | 0.0083 | - |
| 2.2368 | 1700 | 0.0026 | - |
| 2.3026 | 1750 | 0.0081 | - |
| 2.3684 | 1800 | 0.0036 | - |
| 2.4342 | 1850 | 0.0016 | - |
| 2.5 | 1900 | 0.0017 | - |
| 2.5658 | 1950 | 0.0014 | - |
| 2.6316 | 2000 | 0.0017 | - |
| 2.6974 | 2050 | 0.002 | - |
| 2.7632 | 2100 | 0.1022 | - |
| 2.8289 | 2150 | 0.0004 | - |
| 2.8947 | 2200 | 0.0007 | - |
| 2.9605 | 2250 | 0.0794 | - |
| 3.0263 | 2300 | 0.0183 | - |
| 3.0921 | 2350 | 0.0377 | - |
| 3.1579 | 2400 | 0.029 | - |
| 3.2237 | 2450 | 0.0003 | - |
| 3.2895 | 2500 | 0.0961 | - |
| 3.3553 | 2550 | 0.0008 | - |
| 3.4211 | 2600 | 0.0873 | - |
| 3.4868 | 2650 | 0.0501 | - |
| 3.5526 | 2700 | 0.0029 | - |
| 3.6184 | 2750 | 0.0008 | - |
| 3.6842 | 2800 | 0.0004 | - |
| 3.75 | 2850 | 0.0011 | - |
| 3.8158 | 2900 | 0.0518 | - |
| 3.8816 | 2950 | 0.0002 | - |
| 3.9474 | 3000 | 0.1115 | - |
| 4.0132 | 3050 | 0.0129 | - |
| 4.0789 | 3100 | 0.0005 | - |
| 4.1447 | 3150 | 0.0012 | - |
| 4.2105 | 3200 | 0.1086 | - |
| 4.2763 | 3250 | 0.0199 | - |
| 4.3421 | 3300 | 0.0004 | - |
| 4.4079 | 3350 | 0.0001 | - |
| 4.4737 | 3400 | 0.0832 | - |
| 4.5395 | 3450 | 0.0003 | - |
| 4.6053 | 3500 | 0.0041 | - |
| 4.6711 | 3550 | 0.1146 | - |
| 4.7368 | 3600 | 0.0027 | - |
| 4.8026 | 3650 | 0.0002 | - |
| 4.8684 | 3700 | 0.0544 | - |
| 4.9342 | 3750 | 0.0002 | - |
| 5.0 | 3800 | 0.0046 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CAS"
] |
saqlainshah/gemma_2b_finetuned_medal | saqlainshah | text-generation | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"gemma fine tuned",
"medical",
"medal dataset finetuned",
"question answering",
"QA",
"conversational",
"en",
"dataset:medal",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-02-26T07:57:16 | 2024-02-26T08:28:59 | 54 | 0 | ---
datasets:
- medal
language:
- en
library_name: transformers
tags:
- gemma fine tuned
- medical
- medal dataset finetuned
- question answering
- QA
---
# Model Card for Model ID
This model is based on google/gemma-2b-it and was fine-tuned on a small chunk of data from the MEDAL dataset.
It was trained on a Colab TPU.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
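Since no snippet was provided, here is a minimal, untested sketch assuming the checkpoint loads through the standard `transformers` causal-LM API (the prompt is invented for illustration):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "saqlainshah/gemma_2b_finetuned_medal"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative medical-abbreviation prompt in the spirit of the MEDAL task.
prompt = "In the clinical note, what does the abbreviation 'CHF' most likely stand for?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```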
## Training Details
### Training Data
A small chunk of the MEDAL dataset.
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | [
"QUESTION_ANSWERING"
] | [
"MEDAL"
] |
RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2309.06085",
"arxiv:2311.07911",
"arxiv:2306.05685",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-23T14:18:58 | 2024-08-23T17:25:06 | 54 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3-8b-cpt-sea-lionv2.1-instruct - GGUF
- Model creator: https://huggingface.co/aisingapore/
- Original model: https://huggingface.co/aisingapore/llama3-8b-cpt-sea-lionv2.1-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q2_K.gguf) | Q2_K | 2.96GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.IQ3_XS.gguf) | IQ3_XS | 3.28GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.IQ3_S.gguf) | IQ3_S | 3.43GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q3_K_S.gguf) | Q3_K_S | 3.41GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.IQ3_M.gguf) | IQ3_M | 3.52GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q3_K.gguf) | Q3_K | 3.74GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q3_K_M.gguf) | Q3_K_M | 3.74GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q3_K_L.gguf) | Q3_K_L | 4.03GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.IQ4_XS.gguf) | IQ4_XS | 4.18GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q4_0.gguf) | Q4_0 | 4.34GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.IQ4_NL.gguf) | IQ4_NL | 4.38GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q4_K_S.gguf) | Q4_K_S | 4.37GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q4_K.gguf) | Q4_K | 4.58GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q4_K_M.gguf) | Q4_K_M | 4.58GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q4_1.gguf) | Q4_1 | 4.78GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q5_0.gguf) | Q5_0 | 5.21GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q5_K_S.gguf) | Q5_K_S | 5.21GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q5_K.gguf) | Q5_K | 5.34GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q5_K_M.gguf) | Q5_K_M | 5.34GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q5_1.gguf) | Q5_1 | 5.65GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q6_K.gguf) | Q6_K | 6.14GB |
| [llama3-8b-cpt-sea-lionv2.1-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/aisingapore_-_llama3-8b-cpt-sea-lionv2.1-instruct-gguf/blob/main/llama3-8b-cpt-sea-lionv2.1-instruct.Q8_0.gguf) | Q8_0 | 7.95GB |
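As a usage sketch (not part of the original card), any of the files above can be run with `llama-cpp-python`; the file name below assumes the Q4_K_M download from the table:
```python
from llama_cpp import Llama

# Assumes the Q4_K_M file from the table above was downloaded to the working directory.
llm = Llama(
    model_path="llama3-8b-cpt-sea-lionv2.1-instruct.Q4_K_M.gguf",
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Apa ibu kota Indonesia?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```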
Original model description:
---
language:
- en
- id
- ta
- th
- vi
license: llama3
---
# Llama3 8B CPT SEA-Lionv2.1 Instruct
SEA-LION is a collection of Large Language Models (LLMs) which have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama3 8B CPT SEA-Lionv2.1 Instruct is a multilingual model which has been fine-tuned with around **100,000 English instruction-completion pairs** alongside a smaller pool of around **50,000 instruction-completion pairs** from other ASEAN languages, such as Indonesian, Thai and Vietnamese.
These instructions have been carefully curated and rewritten to ensure the model was trained on truly open, commercially permissive and high quality datasets.
Llama3 8B CPT SEA-Lionv2.1 Instruct has undergone additional supervised fine-tuning and alignment compared to the now deprecated Llama3 8B CPT SEA-Lionv2 Instruct. These improvements have increased the model's capabilities in chat interactions and its ability to follow instructions accurately.
SEA-LION stands for _Southeast Asian Languages In One Network_.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages:** English, Indonesian, Thai, Vietnamese, Tamil
- **License:** [Llama3 Community License](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE)
## Model Details
### Model Description
We performed instruction tuning in English and also in ASEAN languages such as Indonesian, Thai and Vietnamese on our [continued pre-trained Llama3 CPT 8B SEA-Lionv2](https://huggingface.co/aisingapore/llama3-8b-cpt-SEA-Lionv2-base), a decoder model using the Llama3 architecture, to create Llama3 8B SEA-Lionv2.1 Instruct.
The model has a context length of 8192 tokens.
### Benchmark Performance
We evaluated Llama3 8B SEA-Lionv2.1 Instruct on both general language capabilities and instruction-following capabilities.
#### General Language Capabilities
For the evaluation of general language capabilities, we employed the [BHASA evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarization (Summ), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: BHASA is implemented following a strict answer format, and only spaces and punctuation are cleaned. For tasks where options are provided, the answer should include only one of the pre-defined options and nothing else. If the model continues to generate more tokens (e.g. to explain its answer), the response is considered wrong. For the F1 score metric (as used in Sentiment Analysis and Toxicity Detection), all answers that do not fall under the pre-defined labels are treated as a separate label (marking them as wrong answers) and included in the calculations, so the model is penalized for not generating one of the pre-defined labels.
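To make the scoring scheme concrete, here is a minimal sketch of the strict F1 computation described above, assuming scikit-learn; the label names and cleaning rules are illustrative assumptions, not the exact BHASA implementation:

```python
from sklearn.metrics import f1_score

LABELS = ["positive", "negative", "neutral"]  # illustrative pre-defined options

def normalize(answer: str) -> str:
    # Only spaces and punctuation are cleaned; everything else counts as-is.
    cleaned = answer.strip().strip(".,!?;:").lower()
    # Any output that is not exactly one pre-defined option is mapped to a
    # separate "invalid" label, so the model is penalized for it.
    return cleaned if cleaned in LABELS else "invalid"

def strict_f1(predictions, references):
    preds = [normalize(p) for p in predictions]
    return f1_score(references, preds, labels=LABELS + ["invalid"], average="macro")

# The second answer adds extra tokens, so it is scored as "invalid" (wrong).
print(strict_f1(["Positive.", "negative because the book is dull"], ["positive", "negative"]))
```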
The evaluation was done zero-shot with native prompts, using a sample of 100-1000 instances per dataset, as per the setting described in the paper.
#### Instruction-following Capabilities
Since Llama3 8B SEA-Lionv2.1 is an instruction-following model, we also evaluated it on instruction-following capabilities with two datasets, [IFEval](https://arxiv.org/abs/2311.07911) and [MT-Bench](https://arxiv.org/abs/2306.05685).
As these two datasets were originally in English, the linguists and native speakers in the team worked together to filter, localize and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
**IFEval**
IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. The metric used is accuracy normalized by language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
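A hedged sketch of this language-normalized accuracy (the per-example booleans stand in for a real constraint checker and language detector):

```python
def ifeval_accuracy(examples):
    """Each example carries two booleans: did the response satisfy the prompt
    constraints, and was it written in the target language? Correct content
    in the wrong language still counts as a failure."""
    passed = sum(
        1 for ex in examples
        if ex["followed_constraints"] and ex["correct_language"]
    )
    return passed / len(examples)

print(ifeval_accuracy([
    {"followed_constraints": True, "correct_language": True},   # pass
    {"followed_constraints": True, "correct_language": False},  # fail: wrong language
]))  # -> 0.5
```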
**MT-Bench**
MT-Bench evaluates a model's ability to engage in multi-turn (2 turns) conversations and respond in ways that align with human needs. We use `gpt-4-1106-preview` as the judge model and compare against `gpt-3.5-turbo-0125` as the baseline model. The metric used is the weighted win rate against the baseline model (i.e. average win rate across each category (Math, Reasoning, STEM, Humanities, Roleplay, Writing, Extraction)). A tie is given a score of 0.5.
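A minimal sketch of the weighted win rate under these rules (the category names come from the text; the judgement data structure is an assumption):

```python
CATEGORIES = ["Math", "Reasoning", "STEM", "Humanities", "Roleplay", "Writing", "Extraction"]

def weighted_win_rate(judgements):
    """judgements maps each category to a list of 'win' / 'tie' / 'loss'
    outcomes against the baseline model. A tie scores 0.5."""
    per_category = []
    for cat in CATEGORIES:
        results = judgements[cat]
        score = sum(1.0 if r == "win" else 0.5 if r == "tie" else 0.0 for r in results)
        per_category.append(score / len(results))
    # Weighted win rate = average of the per-category win rates.
    return sum(per_category) / len(per_category)
```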
For more details on Llama3 8B CPT SEA-Lionv2.1 Instruct benchmark performance, please refer to the SEA HELM leaderboard, https://leaderboard.sea-lion.ai/
### Usage
SEA-LION can be run using the 🤗 Transformers library:
```python
# Please use transformers==4.43.2
import transformers
import torch
model_id = "aisingapore/llama3-8b-cpt-SEA-Lionv2.1-instruct"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "},
]
outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Accessing Older Revisions
Hugging Face supports the `revision` parameter, allowing users to access specific versions of a model. This can be used to retrieve the original llama3-8b-cpt-SEA-Lionv2-instruct model with the tag "v2.0.0".
```python
# Please use transformers==4.43.2
import transformers
import torch
model_id = "aisingapore/llama3-8b-cpt-SEA-Lionv2.1-instruct"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16, "revision": "v2.0.0"},
    device_map="auto",
)
messages = [
    {"role": "user", "content": "Apa sentimen dari kalimat berikut ini?\nKalimat: Buku ini sangat membosankan.\nJawaban: "},
]
outputs = pipeline(
    messages,
    max_new_tokens=256,
)
print(outputs[0]["generated_text"][-1])
```
### Caveats
It is important for users to be aware that our model exhibits certain limitations that warrant consideration. Like many LLMs, the model can hallucinate and occasionally generates irrelevant content, introducing fictional elements that are not grounded in the provided context. Users should also exercise caution in interpreting and validating the model's responses due to the potential inconsistencies in its reasoning.
## Limitations
### Safety
Current SEA-LION models, including this commercially permissive release, have not been aligned for safety. Developers and users should perform their own safety fine-tuning and related security measures. In no event shall the authors be held liable for any claim, damages, or other liability arising from the use of the released weights and codes.
## Technical Specifications
### Fine-Tuning Details
Llama3 8B CPT SEA-Lionv2.1 Instruct was fine-tuned on 8x A100-40GB GPUs using parameter-efficient fine-tuning in the form of LoRA.
## Data
Llama3 8B CPT SEA-Lionv2.1 Instruct was trained on a wide range of instructions that were manually and stringently verified by our team. A large portion of the effort was dedicated to ensuring that each instruction-completion pair the model sees is of high quality; pairs containing errors were corrected and rewritten by native speakers or dropped from our mix.
In addition, special care was taken to ensure that the datasets used had commercially permissive licenses through verification with the original data source.
Link to dataset: _coming soon_
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Choa Esther<br>
Cheng Nicholas<br>
Huang Yuli<br>
Lau Wayne<br>
Lee Chwan Ren<br>
Leong Wai Yi<br>
Leong Wei Qi<br>
Li Yier<br>
Liu Bing Jie Darius<br>
Lovenia Holy<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Brandon<br>
Ong Tat-Wee David<br>
Ong Zhi Hao<br>
Rengarajan Hamsawardhini<br>
Siow Bryan<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Teng Walter<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] |
knowledgator/gliner-qwen-1.5B-v1.0 | knowledgator | token-classification | [
"gliner",
"pytorch",
"NER",
"GLiNER",
"information extraction",
"encoder",
"entity recognition",
"token-classification",
"multilingual",
"dataset:urchade/pile-mistral-v0.1",
"dataset:knowledgator/GLINER-multi-task-synthetic-data",
"dataset:EmergentMethods/AskNews-NER-v0",
"license:apache-2.0",
"region:us"
] | 2024-08-30T18:42:32 | 2024-09-10T09:01:14 | 54 | 4 | ---
datasets:
- urchade/pile-mistral-v0.1
- knowledgator/GLINER-multi-task-synthetic-data
- EmergentMethods/AskNews-NER-v0
language:
- multilingual
library_name: gliner
license: apache-2.0
pipeline_tag: token-classification
tags:
- NER
- GLiNER
- information extraction
- encoder
- entity recognition
---
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs) that, despite their flexibility, are costly and too large for resource-constrained scenarios.
The initial versions of GLiNER relied on older encoder architectures like BERT and DeBERTA. These models, however, were trained on smaller datasets and lacked support for modern optimization techniques such as flash attention. Additionally, their context window was typically limited to 512 tokens, which is insufficient for many practical applications. Recognizing these limitations, we began exploring alternative backbones for GLiNER.
This latest model leverages the LLM2Vec approach, transforming the initial decoder model into a bidirectional encoder. We further enhanced the model by pre-training it on the masked token prediction task using the Wikipedia corpus. This approach introduces several advancements for GLiNER, including support for flash attention, an extended context window, and faster inference times. Additionally, by utilizing modern decoders trained on large, up-to-date datasets, the model exhibits improved generalization and performance.
Key Advantages Over Previous GLiNER Models:
* Enhanced performance and generalization capabilities
* Support for Flash Attention
* Extended context window (up to 32k tokens)
While these models are larger and require more computational resources than older encoders, they are still relatively small by current standards and provide significant benefits for a wide range of use cases.
### Installation & Usage
Install or update the gliner package:
```bash
pip install gliner -U
```
And LLM2Vec package:
```bash
pip install llm2vec
```
To use this particular Qwen-based model, you need a different `transformers` version from the one llm2vec requires, so install it manually:
```bash
pip install transformers==4.44.1
```
Once you've installed the GLiNER library, you can import the GLiNER class, load this model using `GLiNER.from_pretrained`, and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("knowledgator/gliner-qwen-1.5B-v1.0")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels, threshold=0.5)
for entity in entities:
    print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
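For many texts at once, recent `gliner` releases also expose batched inference; a minimal sketch, assuming the `batch_predict_entities` method (check your installed version):

```python
texts = [
    "Albert Einstein was born in Ulm in 1879.",
    "Marie Curie won the Nobel Prize in Physics in 1903.",
]
labels = ["person", "location", "date", "award"]

# One list of entities per input text, in the same order as `texts`.
batch_entities = model.batch_predict_entities(texts, labels, threshold=0.5)
for text, entities in zip(texts, batch_entities):
    for entity in entities:
        print(entity["text"], "=>", entity["label"])
```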
If you want to use flash attention or increase the sequence length, check the following code:
```python
from gliner import GLiNER
import torch
model = GLiNER.from_pretrained(
    "knowledgator/gliner-qwen-1.5B-v1.0",
    _attn_implementation="flash_attention_2",
    max_length=2048,
).to("cuda:0", dtype=torch.float16)
```
### Benchmarks
Below you can see the table with benchmarking results on various named entity recognition datasets:
| Dataset | Score |
|-----------------------------|--------|
| ACE 2004 | 29.8% |
| ACE 2005 | 26.8% |
| AnatEM | 43.7% |
| Broad Tweet Corpus | 68.3% |
| CoNLL 2003 | 67.5% |
| FabNER | 24.9% |
| FindVehicle | 33.2% |
| GENIA_NER | 58.8% |
| HarveyNER | 19.5% |
| MultiNERD | 65.1% |
| Ontonotes | 39.9% |
| PolyglotNER | 45.8% |
| TweetNER7 | 37.0% |
| WikiANN en | 56.0% |
| WikiNeural | 78.3% |
| bc2gm | 58.1% |
| bc4chemd | 65.7% |
| bc5cdr | 72.3% |
| ncbi | 63.3% |
| **Average** | **50.2%** |
| | |
| CrossNER_AI | 58.3% |
| CrossNER_literature | 64.4% |
| CrossNER_music | 71.5% |
| CrossNER_politics | 70.5% |
| CrossNER_science | 65.1% |
| mit-movie | 47.5% |
| mit-restaurant | 33.1% |
| **Average (zero-shot benchmark)** | **58.6%** |
### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG). | [
"NAMED_ENTITY_RECOGNITION"
] | [
"ANATEM",
"BC5CDR"
] |
RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-01T15:21:26 | 2024-11-01T16:04:43 | 54 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b-deduped - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b-deduped/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-2.8b-deduped.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q2_K.gguf) | Q2_K | 1.01GB |
| [pythia-2.8b-deduped.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [pythia-2.8b-deduped.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q3_K.gguf) | Q3_K | 1.38GB |
| [pythia-2.8b-deduped.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q3_K_M.gguf) | Q3_K_M | 1.38GB |
| [pythia-2.8b-deduped.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q3_K_L.gguf) | Q3_K_L | 1.49GB |
| [pythia-2.8b-deduped.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [pythia-2.8b-deduped.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q4_0.gguf) | Q4_0 | 1.49GB |
| [pythia-2.8b-deduped.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [pythia-2.8b-deduped.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q4_K_S.gguf) | Q4_K_S | 1.5GB |
| [pythia-2.8b-deduped.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q4_K.gguf) | Q4_K | 1.66GB |
| [pythia-2.8b-deduped.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q4_K_M.gguf) | Q4_K_M | 1.66GB |
| [pythia-2.8b-deduped.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q4_1.gguf) | Q4_1 | 1.64GB |
| [pythia-2.8b-deduped.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q5_0.gguf) | Q5_0 | 1.8GB |
| [pythia-2.8b-deduped.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [pythia-2.8b-deduped.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q5_K.gguf) | Q5_K | 1.93GB |
| [pythia-2.8b-deduped.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q5_K_M.gguf) | Q5_K_M | 1.93GB |
| [pythia-2.8b-deduped.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q5_1.gguf) | Q5_1 | 1.95GB |
| [pythia-2.8b-deduped.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q6_K.gguf) | Q6_K | 2.13GB |
| [pythia-2.8b-deduped.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf/blob/main/pythia-2.8b-deduped.Q8_0.gguf) | Q8_0 | 2.75GB |
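As with other GGUF repositories, these files run in any GGUF-compatible runtime. A minimal sketch with `llama-cpp-python` (the file choice is illustrative; Pythia is a base model, so plain completion is used rather than chat):

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="RichardErkhov/EleutherAI_-_pythia-2.8b-deduped-gguf",
    filename="pythia-2.8b-deduped.Q4_K_M.gguf",
)
llm = Llama(model_path=path)
out = llm("Hello, I am", max_tokens=32)
print(out["choices"][0]["text"])
```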
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/the_pile_deduplicated
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-2.8B-deduped
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
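The 154 branch names follow the scheme above, so they can be enumerated programmatically; a sketch (loading mirrors the Quickstart below):

```python
from transformers import GPTNeoXForCausalLM

# step0, 10 log-spaced early checkpoints (step1 ... step512), then
# 143 evenly-spaced checkpoints every 1000 steps up to step143000 (== main).
checkpoints = (
    ["step0"]
    + [f"step{2**i}" for i in range(10)]
    + [f"step{i}" for i in range(1000, 144000, 1000)]
)
assert len(checkpoints) == 154

model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-2.8b-deduped", revision=checkpoints[-1]
)
```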
You may also further fine-tune and adapt Pythia-2.8B-deduped for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B-deduped as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B-deduped has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B-deduped will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-2.8B-deduped to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B-deduped may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B-deduped.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
Pythia-2.8B-deduped was trained on the Pile **after the dataset has been globally
deduplicated**.<br>
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
RichardErkhov/EleutherAI_-_pythia-2.8b-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"endpoints_compatible",
"region:us"
] | 2024-11-01T15:21:27 | 2024-11-01T16:04:50 | 53 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-2.8b - GGUF
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/EleutherAI/pythia-2.8b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [pythia-2.8b.Q2_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q2_K.gguf) | Q2_K | 1.01GB |
| [pythia-2.8b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q3_K_S.gguf) | Q3_K_S | 1.16GB |
| [pythia-2.8b.Q3_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q3_K.gguf) | Q3_K | 1.38GB |
| [pythia-2.8b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q3_K_M.gguf) | Q3_K_M | 1.38GB |
| [pythia-2.8b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q3_K_L.gguf) | Q3_K_L | 1.49GB |
| [pythia-2.8b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.IQ4_XS.gguf) | IQ4_XS | 1.43GB |
| [pythia-2.8b.Q4_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q4_0.gguf) | Q4_0 | 1.49GB |
| [pythia-2.8b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.IQ4_NL.gguf) | IQ4_NL | 1.5GB |
| [pythia-2.8b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q4_K_S.gguf) | Q4_K_S | 1.5GB |
| [pythia-2.8b.Q4_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q4_K.gguf) | Q4_K | 1.66GB |
| [pythia-2.8b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q4_K_M.gguf) | Q4_K_M | 1.66GB |
| [pythia-2.8b.Q4_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q4_1.gguf) | Q4_1 | 1.64GB |
| [pythia-2.8b.Q5_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q5_0.gguf) | Q5_0 | 1.8GB |
| [pythia-2.8b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q5_K_S.gguf) | Q5_K_S | 1.8GB |
| [pythia-2.8b.Q5_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q5_K.gguf) | Q5_K | 1.93GB |
| [pythia-2.8b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q5_K_M.gguf) | Q5_K_M | 1.93GB |
| [pythia-2.8b.Q5_1.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q5_1.gguf) | Q5_1 | 1.95GB |
| [pythia-2.8b.Q6_K.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q6_K.gguf) | Q6_K | 2.13GB |
| [pythia-2.8b.Q8_0.gguf](https://huggingface.co/RichardErkhov/EleutherAI_-_pythia-2.8b-gguf/blob/main/pythia-2.8b.Q8_0.gguf) | Q8_0 | 2.75GB |
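Recent versions of `llama-cpp-python` can also download and load a quant in one step; a hedged sketch, assuming the `Llama.from_pretrained` convenience constructor is available in your installed version:

```python
from llama_cpp import Llama

# Downloads the file from the table above and loads it in one call.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/EleutherAI_-_pythia-2.8b-gguf",
    filename="pythia-2.8b.Q4_K_M.gguf",
)
print(llm("The Pile is", max_tokens=32)["choices"][0]["text"])
```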
Original model description:
---
language:
- en
tags:
- pytorch
- causal-lm
- pythia
license: apache-2.0
datasets:
- EleutherAI/pile
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-2.8B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token used by the model need not produce the
most “accurate” text. Never rely on Pythia-2.8B to produce factually accurate
output.
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting it to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer
model = GPTNeoXForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/pythia-70m-deduped",
    revision="step3000",
    cache_dir="./pythia-70m-deduped/step3000",
)
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
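The token counts in the training procedure above are internally consistent, as a quick check shows:

```python
batch_tokens = 2_097_152            # tokens per step (2M batch size)
print(batch_tokens * 143_000)       # 299892736000 tokens seen over training
print(batch_tokens * 1_000)         # 2097152000 tokens between checkpoints
```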
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were
trained with the LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure>
| [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
redshiva/gte-Qwen2-7B-instruct-Q8_0-GGUF | redshiva | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-09-03T21:03:46 | 2024-09-03T21:04:22 | 52 | 1 | ---
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- task:
    type: Retrieval
  dataset:
    name: MTEB CQADupstackWordpressRetrieval
    type: BeIR/cqadupstack
    config: default
    split: test
    revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
  metrics:
  - type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
# redshiva/gte-Qwen2-7B-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo redshiva/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning of life and the universe is"
```
### Server:
```bash
llama-server --hf-repo redshiva/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048
```
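Once the server is up, you can send requests to its HTTP API. The following is a minimal sketch of a completion request against `llama-server`'s default port (8080); the prompt text and `n_predict` value are illustrative only.
```bash
# Send a completion request to the running llama-server instance.
# The server listens on http://localhost:8080 by default.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Deep learning is", "n_predict": 64}'
```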
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo redshiva/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning of life and the universe is"
```
or
```bash
./llama-server --hf-repo redshiva/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048
```
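Since the underlying model is a text-embedding model, you will usually want embedding vectors rather than text completions. The following is a minimal sketch using the `llama-embedding` binary built in the step above; the local model path is an assumption and should match the GGUF file you downloaded.
```bash
# Compute the embedding of a single input string and print it to stdout.
# The model path below is assumed to point at the downloaded GGUF file.
./llama-embedding -m gte-qwen2-7b-instruct-q8_0.gguf -p "What is the capital of China?"
```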
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
Abhikhade/stella_en_400M_v5_aquabotica | Abhikhade | sentence-similarity | [
"sentence-transformers",
"safetensors",
"new",
"feature-extraction",
"mteb",
"transformers",
"sentence-similarity",
"custom_code",
"arxiv:2205.13147",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-03-05T10:11:33 | 2025-03-05T10:47:00 | 52 | 0 | ---
license: mit
tags:
- mteb
- sentence-transformers
- transformers
- sentence-similarity
model-index:
- name: stella_en_400M_v5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 92.35820895522387
- type: ap
value: 70.81322736988783
- type: ap_weighted
value: 70.81322736988783
- type: f1
value: 88.9505466159595
- type: f1_weighted
value: 92.68630932872613
- type: main_score
value: 92.35820895522387
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.1945
- type: ap
value: 96.08192192244094
- type: ap_weighted
value: 96.08192192244094
- type: f1
value: 97.1936887167346
- type: f1_weighted
value: 97.1936887167346
- type: main_score
value: 97.1945
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 59.528000000000006
- type: f1
value: 59.21016819840188
- type: f1_weighted
value: 59.21016819840188
- type: main_score
value: 59.528000000000006
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 64.24
- type: map_at_1
value: 40.398
- type: map_at_10
value: 56.215
- type: map_at_100
value: 56.833999999999996
- type: map_at_1000
value: 56.835
- type: map_at_20
value: 56.747
- type: map_at_3
value: 52.181
- type: map_at_5
value: 54.628
- type: mrr_at_1
value: 41.25177809388336
- type: mrr_at_10
value: 56.570762491815216
- type: mrr_at_100
value: 57.17548614361504
- type: mrr_at_1000
value: 57.176650626377466
- type: mrr_at_20
value: 57.08916253512566
- type: mrr_at_3
value: 52.47747747747754
- type: mrr_at_5
value: 54.94547178757718
- type: nauc_map_at_1000_diff1
value: 22.408086887100158
- type: nauc_map_at_1000_max
value: -8.730419096847543
- type: nauc_map_at_1000_std
value: -17.789262741255737
- type: nauc_map_at_100_diff1
value: 22.407371684274025
- type: nauc_map_at_100_max
value: -8.732263549026266
- type: nauc_map_at_100_std
value: -17.79550515579994
- type: nauc_map_at_10_diff1
value: 21.925005073301246
- type: nauc_map_at_10_max
value: -8.990323944492134
- type: nauc_map_at_10_std
value: -18.199246301671458
- type: nauc_map_at_1_diff1
value: 26.23276644969203
- type: nauc_map_at_1_max
value: -12.376511389571245
- type: nauc_map_at_1_std
value: -18.11411715207284
- type: nauc_map_at_20_diff1
value: 22.32455790850922
- type: nauc_map_at_20_max
value: -8.664671547236034
- type: nauc_map_at_20_std
value: -17.8290016125137
- type: nauc_map_at_3_diff1
value: 22.395462147465064
- type: nauc_map_at_3_max
value: -8.206580750918844
- type: nauc_map_at_3_std
value: -17.604490446911484
- type: nauc_map_at_5_diff1
value: 21.95307379904799
- type: nauc_map_at_5_max
value: -8.03958102978443
- type: nauc_map_at_5_std
value: -17.36578866595004
- type: nauc_mrr_at_1000_diff1
value: 20.124236798365587
- type: nauc_mrr_at_1000_max
value: -9.587376069575898
- type: nauc_mrr_at_1000_std
value: -17.79191612151833
- type: nauc_mrr_at_100_diff1
value: 20.123612603474033
- type: nauc_mrr_at_100_max
value: -9.589187218607831
- type: nauc_mrr_at_100_std
value: -17.7981617777748
- type: nauc_mrr_at_10_diff1
value: 19.723683875738075
- type: nauc_mrr_at_10_max
value: -9.774151729178815
- type: nauc_mrr_at_10_std
value: -18.168668675495162
- type: nauc_mrr_at_1_diff1
value: 23.945332059908132
- type: nauc_mrr_at_1_max
value: -12.260461466152819
- type: nauc_mrr_at_1_std
value: -18.007194922921148
- type: nauc_mrr_at_20_diff1
value: 20.04819461810257
- type: nauc_mrr_at_20_max
value: -9.518368283588936
- type: nauc_mrr_at_20_std
value: -17.831608149836136
- type: nauc_mrr_at_3_diff1
value: 19.8571785245832
- type: nauc_mrr_at_3_max
value: -9.464375021240478
- type: nauc_mrr_at_3_std
value: -17.728533927330453
- type: nauc_mrr_at_5_diff1
value: 19.670313652167827
- type: nauc_mrr_at_5_max
value: -8.966372585728434
- type: nauc_mrr_at_5_std
value: -17.468955834324817
- type: nauc_ndcg_at_1000_diff1
value: 21.863049281767417
- type: nauc_ndcg_at_1000_max
value: -8.18698520924057
- type: nauc_ndcg_at_1000_std
value: -17.634483364794804
- type: nauc_ndcg_at_100_diff1
value: 21.849924385738586
- type: nauc_ndcg_at_100_max
value: -8.226437560889345
- type: nauc_ndcg_at_100_std
value: -17.774648478087002
- type: nauc_ndcg_at_10_diff1
value: 19.888395590413573
- type: nauc_ndcg_at_10_max
value: -8.968706085632382
- type: nauc_ndcg_at_10_std
value: -19.31386964628115
- type: nauc_ndcg_at_1_diff1
value: 26.23276644969203
- type: nauc_ndcg_at_1_max
value: -12.376511389571245
- type: nauc_ndcg_at_1_std
value: -18.11411715207284
- type: nauc_ndcg_at_20_diff1
value: 21.38413342416933
- type: nauc_ndcg_at_20_max
value: -7.636238194084164
- type: nauc_ndcg_at_20_std
value: -17.946390844693028
- type: nauc_ndcg_at_3_diff1
value: 21.29169165029195
- type: nauc_ndcg_at_3_max
value: -6.793840499730093
- type: nauc_ndcg_at_3_std
value: -17.52359001586737
- type: nauc_ndcg_at_5_diff1
value: 20.238297656671364
- type: nauc_ndcg_at_5_max
value: -6.424992706950072
- type: nauc_ndcg_at_5_std
value: -17.082391132291356
- type: nauc_precision_at_1000_diff1
value: -7.05195108528572
- type: nauc_precision_at_1000_max
value: 34.439879624882145
- type: nauc_precision_at_1000_std
value: 68.72436351659353
- type: nauc_precision_at_100_diff1
value: -2.769464113932605
- type: nauc_precision_at_100_max
value: 9.89562961226698
- type: nauc_precision_at_100_std
value: -0.5880967482224028
- type: nauc_precision_at_10_diff1
value: 2.1371544726832323
- type: nauc_precision_at_10_max
value: -11.93051325147756
- type: nauc_precision_at_10_std
value: -30.83144187392059
- type: nauc_precision_at_1_diff1
value: 26.23276644969203
- type: nauc_precision_at_1_max
value: -12.376511389571245
- type: nauc_precision_at_1_std
value: -18.11411715207284
- type: nauc_precision_at_20_diff1
value: 3.780146814257504
- type: nauc_precision_at_20_max
value: 17.06527540214615
- type: nauc_precision_at_20_std
value: -20.36832563035565
- type: nauc_precision_at_3_diff1
value: 17.63894384012077
- type: nauc_precision_at_3_max
value: -2.0220490624638887
- type: nauc_precision_at_3_std
value: -17.285601413493918
- type: nauc_precision_at_5_diff1
value: 12.557855071944601
- type: nauc_precision_at_5_max
value: 0.5840236463956658
- type: nauc_precision_at_5_std
value: -15.827224420217846
- type: nauc_recall_at_1000_diff1
value: -7.051951085286463
- type: nauc_recall_at_1000_max
value: 34.43987962487738
- type: nauc_recall_at_1000_std
value: 68.724363516591
- type: nauc_recall_at_100_diff1
value: -2.769464113930314
- type: nauc_recall_at_100_max
value: 9.895629612270017
- type: nauc_recall_at_100_std
value: -0.58809674821745
- type: nauc_recall_at_10_diff1
value: 2.1371544726834495
- type: nauc_recall_at_10_max
value: -11.930513251477253
- type: nauc_recall_at_10_std
value: -30.83144187392047
- type: nauc_recall_at_1_diff1
value: 26.23276644969203
- type: nauc_recall_at_1_max
value: -12.376511389571245
- type: nauc_recall_at_1_std
value: -18.11411715207284
- type: nauc_recall_at_20_diff1
value: 3.7801468142575922
- type: nauc_recall_at_20_max
value: 17.0652754021456
- type: nauc_recall_at_20_std
value: -20.36832563035559
- type: nauc_recall_at_3_diff1
value: 17.63894384012074
- type: nauc_recall_at_3_max
value: -2.02204906246383
- type: nauc_recall_at_3_std
value: -17.28560141349386
- type: nauc_recall_at_5_diff1
value: 12.55785507194463
- type: nauc_recall_at_5_max
value: 0.5840236463957296
- type: nauc_recall_at_5_std
value: -15.827224420217856
- type: ndcg_at_1
value: 40.398
- type: ndcg_at_10
value: 64.24
- type: ndcg_at_100
value: 66.631
- type: ndcg_at_1000
value: 66.65100000000001
- type: ndcg_at_20
value: 66.086
- type: ndcg_at_3
value: 55.938
- type: ndcg_at_5
value: 60.370000000000005
- type: precision_at_1
value: 40.398
- type: precision_at_10
value: 8.962
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.836
- type: precision_at_3
value: 22.262
- type: precision_at_5
value: 15.519
- type: recall_at_1
value: 40.398
- type: recall_at_10
value: 89.616
- type: recall_at_100
value: 99.502
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 96.72800000000001
- type: recall_at_3
value: 66.78500000000001
- type: recall_at_5
value: 77.596
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 55.1564333205451
- type: v_measure
value: 55.1564333205451
- type: v_measure_std
value: 14.696883012214512
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 49.823698316694795
- type: v_measure
value: 49.823698316694795
- type: v_measure_std
value: 14.951660654298186
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 66.15294503553424
- type: map
value: 66.15294503553424
- type: mrr
value: 78.53438420612935
- type: nAUC_map_diff1
value: 12.569697092717997
- type: nAUC_map_max
value: 21.50670312412572
- type: nAUC_map_std
value: 16.943786429229064
- type: nAUC_mrr_diff1
value: 15.590272897361238
- type: nAUC_mrr_max
value: 34.96072022474653
- type: nAUC_mrr_std
value: 21.649217605241045
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 85.7824546319275
- type: cosine_spearman
value: 83.29587385660628
- type: euclidean_pearson
value: 84.58764190565167
- type: euclidean_spearman
value: 83.30069324352772
- type: main_score
value: 83.29587385660628
- type: manhattan_pearson
value: 84.95996839947179
- type: manhattan_spearman
value: 83.87480271054358
- type: pearson
value: 85.7824546319275
- type: spearman
value: 83.29587385660628
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 89.30194805194806
- type: f1
value: 89.26182507266391
- type: f1_weighted
value: 89.26182507266391
- type: main_score
value: 89.30194805194806
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 50.67972171889736
- type: v_measure
value: 50.67972171889736
- type: v_measure_std
value: 0.7687409980036303
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 45.80539715556144
- type: v_measure
value: 45.80539715556144
- type: v_measure_std
value: 0.9601346216579142
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 44.361250000000005
- type: map_at_1
value: 28.304499999999997
- type: map_at_10
value: 38.54841666666666
- type: map_at_100
value: 39.83141666666667
- type: map_at_1000
value: 39.944750000000006
- type: map_at_20
value: 39.25341666666667
- type: map_at_3
value: 35.406749999999995
- type: map_at_5
value: 37.15558333333333
- type: mrr_at_1
value: 34.09077232860122
- type: mrr_at_10
value: 43.15445393211421
- type: mrr_at_100
value: 43.98645286848257
- type: mrr_at_1000
value: 44.037631313469404
- type: mrr_at_20
value: 43.64045813249614
- type: mrr_at_3
value: 40.674138648480486
- type: mrr_at_5
value: 42.106251182620255
- type: nauc_map_at_1000_diff1
value: 46.250011739434996
- type: nauc_map_at_1000_max
value: 30.13664446260598
- type: nauc_map_at_1000_std
value: 5.422301791618935
- type: nauc_map_at_100_diff1
value: 46.253631351999395
- type: nauc_map_at_100_max
value: 30.12612918885181
- type: nauc_map_at_100_std
value: 5.367077019987172
- type: nauc_map_at_10_diff1
value: 46.328171341741346
- type: nauc_map_at_10_max
value: 29.80274612581464
- type: nauc_map_at_10_std
value: 4.62996685176396
- type: nauc_map_at_1_diff1
value: 51.56118117729493
- type: nauc_map_at_1_max
value: 27.94885243863768
- type: nauc_map_at_1_std
value: 1.700366508927356
- type: nauc_map_at_20_diff1
value: 46.286750260299094
- type: nauc_map_at_20_max
value: 29.979205290353278
- type: nauc_map_at_20_std
value: 5.010588412441873
- type: nauc_map_at_3_diff1
value: 47.10018183619064
- type: nauc_map_at_3_max
value: 29.062318206078753
- type: nauc_map_at_3_std
value: 3.2235696254694197
- type: nauc_map_at_5_diff1
value: 46.41971733050039
- type: nauc_map_at_5_max
value: 29.456798617695657
- type: nauc_map_at_5_std
value: 4.0921691023077145
- type: nauc_mrr_at_1000_diff1
value: 45.88888977975723
- type: nauc_mrr_at_1000_max
value: 32.162138978089544
- type: nauc_mrr_at_1000_std
value: 6.2811943424217915
- type: nauc_mrr_at_100_diff1
value: 45.87480433011124
- type: nauc_mrr_at_100_max
value: 32.16011334212834
- type: nauc_mrr_at_100_std
value: 6.2865717772421785
- type: nauc_mrr_at_10_diff1
value: 45.849652904658825
- type: nauc_mrr_at_10_max
value: 32.13847916232293
- type: nauc_mrr_at_10_std
value: 6.105718728141999
- type: nauc_mrr_at_1_diff1
value: 51.013730325062156
- type: nauc_mrr_at_1_max
value: 32.77457396492779
- type: nauc_mrr_at_1_std
value: 4.415684893471724
- type: nauc_mrr_at_20_diff1
value: 45.86663046255274
- type: nauc_mrr_at_20_max
value: 32.15219360697865
- type: nauc_mrr_at_20_std
value: 6.19603046412763
- type: nauc_mrr_at_3_diff1
value: 46.522376582423185
- type: nauc_mrr_at_3_max
value: 32.18259009733714
- type: nauc_mrr_at_3_std
value: 5.288000648220897
- type: nauc_mrr_at_5_diff1
value: 45.86611481369745
- type: nauc_mrr_at_5_max
value: 32.14261639054921
- type: nauc_mrr_at_5_std
value: 5.8811238177073735
- type: nauc_ndcg_at_1000_diff1
value: 44.5055097547565
- type: nauc_ndcg_at_1000_max
value: 31.149682057975458
- type: nauc_ndcg_at_1000_std
value: 8.157937194901333
- type: nauc_ndcg_at_100_diff1
value: 44.12398363638596
- type: nauc_ndcg_at_100_max
value: 30.878064321409994
- type: nauc_ndcg_at_100_std
value: 8.40493441452808
- type: nauc_ndcg_at_10_diff1
value: 44.200093505221474
- type: nauc_ndcg_at_10_max
value: 30.15267107733158
- type: nauc_ndcg_at_10_std
value: 6.407495361566107
- type: nauc_ndcg_at_1_diff1
value: 51.013730325062156
- type: nauc_ndcg_at_1_max
value: 32.77457396492779
- type: nauc_ndcg_at_1_std
value: 4.415684893471724
- type: nauc_ndcg_at_20_diff1
value: 44.16988321564116
- type: nauc_ndcg_at_20_max
value: 30.333532500651213
- type: nauc_ndcg_at_20_std
value: 7.10024701386895
- type: nauc_ndcg_at_3_diff1
value: 45.35982873879988
- type: nauc_ndcg_at_3_max
value: 30.288312457948702
- type: nauc_ndcg_at_3_std
value: 4.653900898293395
- type: nauc_ndcg_at_5_diff1
value: 44.324558115380185
- type: nauc_ndcg_at_5_max
value: 30.048149698941373
- type: nauc_ndcg_at_5_std
value: 5.6684459618413205
- type: nauc_precision_at_1000_diff1
value: -7.282175798304458
- type: nauc_precision_at_1000_max
value: 7.820142031765352
- type: nauc_precision_at_1000_std
value: 11.736131836431172
- type: nauc_precision_at_100_diff1
value: 1.0222940256506976
- type: nauc_precision_at_100_max
value: 16.12346497070298
- type: nauc_precision_at_100_std
value: 18.202607395247874
- type: nauc_precision_at_10_diff1
value: 18.289439185857837
- type: nauc_precision_at_10_max
value: 26.116517399154375
- type: nauc_precision_at_10_std
value: 13.921214069982302
- type: nauc_precision_at_1_diff1
value: 51.013730325062156
- type: nauc_precision_at_1_max
value: 32.77457396492779
- type: nauc_precision_at_1_std
value: 4.415684893471724
- type: nauc_precision_at_20_diff1
value: 12.365165405210886
- type: nauc_precision_at_20_max
value: 22.946297258937367
- type: nauc_precision_at_20_std
value: 16.13862870358933
- type: nauc_precision_at_3_diff1
value: 32.063423642849685
- type: nauc_precision_at_3_max
value: 30.140965811989407
- type: nauc_precision_at_3_std
value: 8.501746262550146
- type: nauc_precision_at_5_diff1
value: 24.777203357717948
- type: nauc_precision_at_5_max
value: 28.401579566848472
- type: nauc_precision_at_5_std
value: 11.643246774390914
- type: nauc_recall_at_1000_diff1
value: 30.04216463401409
- type: nauc_recall_at_1000_max
value: 34.98067760563842
- type: nauc_recall_at_1000_std
value: 48.01453905250591
- type: nauc_recall_at_100_diff1
value: 31.193415507513972
- type: nauc_recall_at_100_max
value: 28.69740149270981
- type: nauc_recall_at_100_std
value: 25.20960758920368
- type: nauc_recall_at_10_diff1
value: 36.18870823636506
- type: nauc_recall_at_10_max
value: 26.005625231341238
- type: nauc_recall_at_10_std
value: 8.891983977041376
- type: nauc_recall_at_1_diff1
value: 51.56118117729493
- type: nauc_recall_at_1_max
value: 27.94885243863768
- type: nauc_recall_at_1_std
value: 1.700366508927356
- type: nauc_recall_at_20_diff1
value: 34.93996118564803
- type: nauc_recall_at_20_max
value: 26.149961715956138
- type: nauc_recall_at_20_std
value: 12.0657502367633
- type: nauc_recall_at_3_diff1
value: 40.80743946709512
- type: nauc_recall_at_3_max
value: 26.443127773025783
- type: nauc_recall_at_3_std
value: 3.7011448604241477
- type: nauc_recall_at_5_diff1
value: 37.608535157055776
- type: nauc_recall_at_5_max
value: 26.168016189725822
- type: nauc_recall_at_5_std
value: 6.344191564595316
- type: ndcg_at_1
value: 34.09083333333333
- type: ndcg_at_10
value: 44.361250000000005
- type: ndcg_at_100
value: 49.586166666666664
- type: ndcg_at_1000
value: 51.623583333333336
- type: ndcg_at_20
value: 46.40158333333333
- type: ndcg_at_3
value: 39.27733333333333
- type: ndcg_at_5
value: 41.662333333333336
- type: precision_at_1
value: 34.09083333333333
- type: precision_at_10
value: 7.957000000000002
- type: precision_at_100
value: 1.2521666666666669
- type: precision_at_1000
value: 0.16125
- type: precision_at_20
value: 4.6755
- type: precision_at_3
value: 18.402083333333334
- type: precision_at_5
value: 13.104333333333335
- type: recall_at_1
value: 28.304499999999997
- type: recall_at_10
value: 56.80666666666667
- type: recall_at_100
value: 79.66208333333334
- type: recall_at_1000
value: 93.6455
- type: recall_at_20
value: 64.2495
- type: recall_at_3
value: 42.431333333333335
- type: recall_at_5
value: 48.665416666666665
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 43.525999999999996
- type: map_at_1
value: 19.291
- type: map_at_10
value: 33.471000000000004
- type: map_at_100
value: 35.388999999999996
- type: map_at_1000
value: 35.568
- type: map_at_20
value: 34.496
- type: map_at_3
value: 28.713
- type: map_at_5
value: 31.384
- type: mrr_at_1
value: 43.77850162866449
- type: mrr_at_10
value: 56.28576598934912
- type: mrr_at_100
value: 56.8588518168194
- type: mrr_at_1000
value: 56.878236725973544
- type: mrr_at_20
value: 56.6409328120183
- type: mrr_at_3
value: 53.56134636264935
- type: mrr_at_5
value: 55.27795874049956
- type: nauc_map_at_1000_diff1
value: 27.262513153363876
- type: nauc_map_at_1000_max
value: 40.099398684385584
- type: nauc_map_at_1000_std
value: 18.847812394005512
- type: nauc_map_at_100_diff1
value: 27.238993503030745
- type: nauc_map_at_100_max
value: 40.07730434492169
- type: nauc_map_at_100_std
value: 18.795349250833684
- type: nauc_map_at_10_diff1
value: 27.70929180366227
- type: nauc_map_at_10_max
value: 39.55987024970173
- type: nauc_map_at_10_std
value: 17.214881544648996
- type: nauc_map_at_1_diff1
value: 43.34155892182403
- type: nauc_map_at_1_max
value: 38.23324890148018
- type: nauc_map_at_1_std
value: 6.0781444393516075
- type: nauc_map_at_20_diff1
value: 27.311577477800103
- type: nauc_map_at_20_max
value: 39.624414083413456
- type: nauc_map_at_20_std
value: 18.149811054163287
- type: nauc_map_at_3_diff1
value: 30.475965062734367
- type: nauc_map_at_3_max
value: 38.49324825043695
- type: nauc_map_at_3_std
value: 13.357656038648487
- type: nauc_map_at_5_diff1
value: 28.425110095017747
- type: nauc_map_at_5_max
value: 39.017894870747796
- type: nauc_map_at_5_std
value: 15.543817194122564
- type: nauc_mrr_at_1000_diff1
value: 33.16689354701644
- type: nauc_mrr_at_1000_max
value: 41.70755363247148
- type: nauc_mrr_at_1000_std
value: 24.61667417463176
- type: nauc_mrr_at_100_diff1
value: 33.147229262917506
- type: nauc_mrr_at_100_max
value: 41.712455697170725
- type: nauc_mrr_at_100_std
value: 24.6418922043652
- type: nauc_mrr_at_10_diff1
value: 32.94185191112572
- type: nauc_mrr_at_10_max
value: 41.64272730141954
- type: nauc_mrr_at_10_std
value: 24.663391015702707
- type: nauc_mrr_at_1_diff1
value: 39.571969559016395
- type: nauc_mrr_at_1_max
value: 39.396249211263495
- type: nauc_mrr_at_1_std
value: 16.984149923258357
- type: nauc_mrr_at_20_diff1
value: 33.10040770334742
- type: nauc_mrr_at_20_max
value: 41.807565560083034
- type: nauc_mrr_at_20_std
value: 24.8064180365271
- type: nauc_mrr_at_3_diff1
value: 33.065406161485704
- type: nauc_mrr_at_3_max
value: 41.049510969934694
- type: nauc_mrr_at_3_std
value: 23.18371458928609
- type: nauc_mrr_at_5_diff1
value: 33.2389593543916
- type: nauc_mrr_at_5_max
value: 41.629486918949915
- type: nauc_mrr_at_5_std
value: 24.5777253036149
- type: nauc_ndcg_at_1000_diff1
value: 25.868840609197637
- type: nauc_ndcg_at_1000_max
value: 42.79564910784761
- type: nauc_ndcg_at_1000_std
value: 27.035091271680113
- type: nauc_ndcg_at_100_diff1
value: 25.019789319579942
- type: nauc_ndcg_at_100_max
value: 42.482345143533735
- type: nauc_ndcg_at_100_std
value: 26.76872010731345
- type: nauc_ndcg_at_10_diff1
value: 25.949464660653238
- type: nauc_ndcg_at_10_max
value: 40.79769544643906
- type: nauc_ndcg_at_10_std
value: 22.486116508973204
- type: nauc_ndcg_at_1_diff1
value: 39.571969559016395
- type: nauc_ndcg_at_1_max
value: 39.396249211263495
- type: nauc_ndcg_at_1_std
value: 16.984149923258357
- type: nauc_ndcg_at_20_diff1
value: 25.173455685962214
- type: nauc_ndcg_at_20_max
value: 40.88873540662413
- type: nauc_ndcg_at_20_std
value: 24.4451041955519
- type: nauc_ndcg_at_3_diff1
value: 28.185416070726333
- type: nauc_ndcg_at_3_max
value: 39.10600031163912
- type: nauc_ndcg_at_3_std
value: 18.42694044215541
- type: nauc_ndcg_at_5_diff1
value: 27.112647584005583
- type: nauc_ndcg_at_5_max
value: 40.154045682322526
- type: nauc_ndcg_at_5_std
value: 20.26822517176828
- type: nauc_precision_at_1000_diff1
value: -16.42087927044017
- type: nauc_precision_at_1000_max
value: 3.5326295053913
- type: nauc_precision_at_1000_std
value: 24.406810708493197
- type: nauc_precision_at_100_diff1
value: -12.17648135724982
- type: nauc_precision_at_100_max
value: 15.895489260126183
- type: nauc_precision_at_100_std
value: 32.48346122610907
- type: nauc_precision_at_10_diff1
value: -1.2493131347748072
- type: nauc_precision_at_10_max
value: 26.409459305604376
- type: nauc_precision_at_10_std
value: 31.115432019300016
- type: nauc_precision_at_1_diff1
value: 39.571969559016395
- type: nauc_precision_at_1_max
value: 39.396249211263495
- type: nauc_precision_at_1_std
value: 16.984149923258357
- type: nauc_precision_at_20_diff1
value: -6.597509397240593
- type: nauc_precision_at_20_max
value: 21.461984620659695
- type: nauc_precision_at_20_std
value: 32.9450259748889
- type: nauc_precision_at_3_diff1
value: 9.46378764865453
- type: nauc_precision_at_3_max
value: 32.03650819375425
- type: nauc_precision_at_3_std
value: 26.489382638510765
- type: nauc_precision_at_5_diff1
value: 3.5987036728169537
- type: nauc_precision_at_5_max
value: 30.633955978579703
- type: nauc_precision_at_5_std
value: 30.532430088014443
- type: nauc_recall_at_1000_diff1
value: 10.714633106872254
- type: nauc_recall_at_1000_max
value: 43.94958623961
- type: nauc_recall_at_1000_std
value: 51.78914468954123
- type: nauc_recall_at_100_diff1
value: 9.63781472255557
- type: nauc_recall_at_100_max
value: 38.50917465255336
- type: nauc_recall_at_100_std
value: 37.78623984642377
- type: nauc_recall_at_10_diff1
value: 16.480342820841688
- type: nauc_recall_at_10_max
value: 35.982566867357406
- type: nauc_recall_at_10_std
value: 23.30688188788895
- type: nauc_recall_at_1_diff1
value: 43.34155892182403
- type: nauc_recall_at_1_max
value: 38.23324890148018
- type: nauc_recall_at_1_std
value: 6.0781444393516075
- type: nauc_recall_at_20_diff1
value: 13.521048985146367
- type: nauc_recall_at_20_max
value: 34.62462209239834
- type: nauc_recall_at_20_std
value: 27.85924191501618
- type: nauc_recall_at_3_diff1
value: 23.57032748533523
- type: nauc_recall_at_3_max
value: 36.32703197635613
- type: nauc_recall_at_3_std
value: 15.730238734014337
- type: nauc_recall_at_5_diff1
value: 19.61387036368584
- type: nauc_recall_at_5_max
value: 36.22030835529556
- type: nauc_recall_at_5_std
value: 19.76310648649897
- type: ndcg_at_1
value: 43.779
- type: ndcg_at_10
value: 43.525999999999996
- type: ndcg_at_100
value: 50.138000000000005
- type: ndcg_at_1000
value: 52.991
- type: ndcg_at_20
value: 46.083
- type: ndcg_at_3
value: 38.002
- type: ndcg_at_5
value: 39.842
- type: precision_at_1
value: 43.779
- type: precision_at_10
value: 13.205
- type: precision_at_100
value: 2.051
- type: precision_at_1000
value: 0.259
- type: precision_at_20
value: 7.722999999999999
- type: precision_at_3
value: 28.903000000000002
- type: precision_at_5
value: 21.368000000000002
- type: recall_at_1
value: 19.291
- type: recall_at_10
value: 48.754
- type: recall_at_100
value: 70.97200000000001
- type: recall_at_1000
value: 86.611
- type: recall_at_20
value: 55.884
- type: recall_at_3
value: 34.101
- type: recall_at_5
value: 40.784
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 49.884
- type: map_at_1
value: 9.913
- type: map_at_10
value: 23.186999999999998
- type: map_at_100
value: 34.207
- type: map_at_1000
value: 36.318
- type: map_at_20
value: 27.419
- type: map_at_3
value: 15.656
- type: map_at_5
value: 18.945999999999998
- type: mrr_at_1
value: 75.75
- type: mrr_at_10
value: 82.16279761904761
- type: mrr_at_100
value: 82.48445635330299
- type: mrr_at_1000
value: 82.4870246719901
- type: mrr_at_20
value: 82.36203632968338
- type: mrr_at_3
value: 81.29166666666666
- type: mrr_at_5
value: 82.02916666666667
- type: nauc_map_at_1000_diff1
value: 17.0739966990996
- type: nauc_map_at_1000_max
value: 28.440065298437133
- type: nauc_map_at_1000_std
value: 20.83498154003865
- type: nauc_map_at_100_diff1
value: 17.75982086107111
- type: nauc_map_at_100_max
value: 26.87850835673573
- type: nauc_map_at_100_std
value: 18.350282298599275
- type: nauc_map_at_10_diff1
value: 17.15984258564116
- type: nauc_map_at_10_max
value: 10.846179132675553
- type: nauc_map_at_10_std
value: -6.263534464094614
- type: nauc_map_at_1_diff1
value: 24.014897777973694
- type: nauc_map_at_1_max
value: -4.556638938723358
- type: nauc_map_at_1_std
value: -22.7844467526989
- type: nauc_map_at_20_diff1
value: 16.3179372493187
- type: nauc_map_at_20_max
value: 17.176378915498915
- type: nauc_map_at_20_std
value: 1.9378637630340372
- type: nauc_map_at_3_diff1
value: 19.12786794046792
- type: nauc_map_at_3_max
value: 0.09063919305677291
- type: nauc_map_at_3_std
value: -16.713143158330492
- type: nauc_map_at_5_diff1
value: 18.76504725420023
- type: nauc_map_at_5_max
value: 5.040867712207419
- type: nauc_map_at_5_std
value: -12.382578318931165
- type: nauc_mrr_at_1000_diff1
value: 54.61266255011247
- type: nauc_mrr_at_1000_max
value: 60.83961280977112
- type: nauc_mrr_at_1000_std
value: 32.70429260443016
- type: nauc_mrr_at_100_diff1
value: 54.61346236538542
- type: nauc_mrr_at_100_max
value: 60.8407974416647
- type: nauc_mrr_at_100_std
value: 32.69272843993462
- type: nauc_mrr_at_10_diff1
value: 54.74633685810871
- type: nauc_mrr_at_10_max
value: 61.084525933097865
- type: nauc_mrr_at_10_std
value: 33.001220210025565
- type: nauc_mrr_at_1_diff1
value: 56.12708423835806
- type: nauc_mrr_at_1_max
value: 58.9314540998289
- type: nauc_mrr_at_1_std
value: 27.39422607651012
- type: nauc_mrr_at_20_diff1
value: 54.58896150245695
- type: nauc_mrr_at_20_max
value: 60.890929983464815
- type: nauc_mrr_at_20_std
value: 32.65559641276393
- type: nauc_mrr_at_3_diff1
value: 54.38229071443791
- type: nauc_mrr_at_3_max
value: 59.987849044098596
- type: nauc_mrr_at_3_std
value: 33.439813880719974
- type: nauc_mrr_at_5_diff1
value: 54.961790262449824
- type: nauc_mrr_at_5_max
value: 61.17705173908951
- type: nauc_mrr_at_5_std
value: 33.30939850734856
- type: nauc_ndcg_at_1000_diff1
value: 29.27465932507067
- type: nauc_ndcg_at_1000_max
value: 47.952543312315214
- type: nauc_ndcg_at_1000_std
value: 36.17132236391485
- type: nauc_ndcg_at_100_diff1
value: 28.63072328980134
- type: nauc_ndcg_at_100_max
value: 41.460833419186564
- type: nauc_ndcg_at_100_std
value: 27.157100358988135
- type: nauc_ndcg_at_10_diff1
value: 23.41488013023301
- type: nauc_ndcg_at_10_max
value: 39.27798133072349
- type: nauc_ndcg_at_10_std
value: 21.979241438928312
- type: nauc_ndcg_at_1_diff1
value: 46.12120543657642
- type: nauc_ndcg_at_1_max
value: 47.28452124039853
- type: nauc_ndcg_at_1_std
value: 19.799884708952543
- type: nauc_ndcg_at_20_diff1
value: 23.627669045115574
- type: nauc_ndcg_at_20_max
value: 35.88225062457673
- type: nauc_ndcg_at_20_std
value: 18.218628030529498
- type: nauc_ndcg_at_3_diff1
value: 25.37309228946118
- type: nauc_ndcg_at_3_max
value: 40.64426332992231
- type: nauc_ndcg_at_3_std
value: 24.608330645901482
- type: nauc_ndcg_at_5_diff1
value: 24.055798594999654
- type: nauc_ndcg_at_5_max
value: 41.16180524175431
- type: nauc_ndcg_at_5_std
value: 24.048305528761315
- type: nauc_precision_at_1000_diff1
value: -18.234943251015576
- type: nauc_precision_at_1000_max
value: 0.48708502364659184
- type: nauc_precision_at_1000_std
value: 2.4473601543134027
- type: nauc_precision_at_100_diff1
value: -3.0077810947381227
- type: nauc_precision_at_100_max
value: 25.27249321108913
- type: nauc_precision_at_100_std
value: 37.36575792126928
- type: nauc_precision_at_10_diff1
value: -0.2393778190297635
- type: nauc_precision_at_10_max
value: 36.40513293547299
- type: nauc_precision_at_10_std
value: 37.4827885766009
- type: nauc_precision_at_1_diff1
value: 56.12708423835806
- type: nauc_precision_at_1_max
value: 58.9314540998289
- type: nauc_precision_at_1_std
value: 27.39422607651012
- type: nauc_precision_at_20_diff1
value: -1.2010133229402933
- type: nauc_precision_at_20_max
value: 34.117541814385966
- type: nauc_precision_at_20_std
value: 39.13273254177449
- type: nauc_precision_at_3_diff1
value: 11.757378092198486
- type: nauc_precision_at_3_max
value: 42.637962482588875
- type: nauc_precision_at_3_std
value: 37.42465077352342
- type: nauc_precision_at_5_diff1
value: 7.233177203405101
- type: nauc_precision_at_5_max
value: 43.1663582897407
- type: nauc_precision_at_5_std
value: 38.848449220750055
- type: nauc_recall_at_1000_diff1
value: 27.33938551969145
- type: nauc_recall_at_1000_max
value: 45.5614254479334
- type: nauc_recall_at_1000_std
value: 50.58528916250458
- type: nauc_recall_at_100_diff1
value: 23.610383761920097
- type: nauc_recall_at_100_max
value: 31.422168485847184
- type: nauc_recall_at_100_std
value: 25.58649926458304
- type: nauc_recall_at_10_diff1
value: 14.62495111808408
- type: nauc_recall_at_10_max
value: 7.4295041277681095
- type: nauc_recall_at_10_std
value: -9.32297089600654
- type: nauc_recall_at_1_diff1
value: 24.014897777973694
- type: nauc_recall_at_1_max
value: -4.556638938723358
- type: nauc_recall_at_1_std
value: -22.7844467526989
- type: nauc_recall_at_20_diff1
value: 14.027862330014662
- type: nauc_recall_at_20_max
value: 12.437478731690844
- type: nauc_recall_at_20_std
value: -3.0740743798103676
- type: nauc_recall_at_3_diff1
value: 16.354018356566712
- type: nauc_recall_at_3_max
value: -2.9812231240997917
- type: nauc_recall_at_3_std
value: -18.27746460743442
- type: nauc_recall_at_5_diff1
value: 16.81486583473587
- type: nauc_recall_at_5_max
value: 2.420128513974744
- type: nauc_recall_at_5_std
value: -14.441820321214108
- type: ndcg_at_1
value: 63.87500000000001
- type: ndcg_at_10
value: 49.884
- type: ndcg_at_100
value: 54.738
- type: ndcg_at_1000
value: 61.635
- type: ndcg_at_20
value: 48.894999999999996
- type: ndcg_at_3
value: 54.287
- type: ndcg_at_5
value: 52.40899999999999
- type: precision_at_1
value: 75.75
- type: precision_at_10
value: 40.9
- type: precision_at_100
value: 13.139999999999999
- type: precision_at_1000
value: 2.533
- type: precision_at_20
value: 30.8
- type: precision_at_3
value: 57.667
- type: precision_at_5
value: 51.05
- type: recall_at_1
value: 9.913
- type: recall_at_10
value: 28.591
- type: recall_at_100
value: 61.017999999999994
- type: recall_at_1000
value: 83.383
- type: recall_at_20
value: 37.834
- type: recall_at_3
value: 17.049
- type: recall_at_5
value: 21.685
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 78.77499999999999
- type: f1
value: 73.74058240799386
- type: f1_weighted
value: 79.78804377638227
- type: main_score
value: 78.77499999999999
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 90.986
- type: map_at_1
value: 81.601
- type: map_at_10
value: 88.242
- type: map_at_100
value: 88.46000000000001
- type: map_at_1000
value: 88.472
- type: map_at_20
value: 88.375
- type: map_at_3
value: 87.237
- type: map_at_5
value: 87.85300000000001
- type: mrr_at_1
value: 87.81878187818782
- type: mrr_at_10
value: 92.20301196786335
- type: mrr_at_100
value: 92.24884236673292
- type: mrr_at_1000
value: 92.2496338899362
- type: mrr_at_20
value: 92.23112073283473
- type: mrr_at_3
value: 91.77417741774165
- type: mrr_at_5
value: 92.03970397039689
- type: nauc_map_at_1000_diff1
value: 56.54670664910505
- type: nauc_map_at_1000_max
value: 33.08375749975477
- type: nauc_map_at_1000_std
value: 2.7491595418252865
- type: nauc_map_at_100_diff1
value: 56.50887688686924
- type: nauc_map_at_100_max
value: 33.075487189958494
- type: nauc_map_at_100_std
value: 2.7675869969253375
- type: nauc_map_at_10_diff1
value: 56.08080806610569
- type: nauc_map_at_10_max
value: 32.776972098819066
- type: nauc_map_at_10_std
value: 2.5904846711290097
- type: nauc_map_at_1_diff1
value: 60.645344065853145
- type: nauc_map_at_1_max
value: 31.232776777514797
- type: nauc_map_at_1_std
value: -1.1946138176109171
- type: nauc_map_at_20_diff1
value: 56.28378454162355
- type: nauc_map_at_20_max
value: 32.98207150385811
- type: nauc_map_at_20_std
value: 2.8469814040214025
- type: nauc_map_at_3_diff1
value: 55.81958007095375
- type: nauc_map_at_3_max
value: 31.602707711038313
- type: nauc_map_at_3_std
value: 0.8117019292273401
- type: nauc_map_at_5_diff1
value: 55.706025752316535
- type: nauc_map_at_5_max
value: 32.16032683604737
- type: nauc_map_at_5_std
value: 1.8853201503498669
- type: nauc_mrr_at_1000_diff1
value: 75.4997173366251
- type: nauc_mrr_at_1000_max
value: 41.49117135484116
- type: nauc_mrr_at_1000_std
value: -2.0636172883680852
- type: nauc_mrr_at_100_diff1
value: 75.50118860648519
- type: nauc_mrr_at_100_max
value: 41.49490161517194
- type: nauc_mrr_at_100_std
value: -2.057024385178682
- type: nauc_mrr_at_10_diff1
value: 75.47295153099428
- type: nauc_mrr_at_10_max
value: 41.55003304042536
- type: nauc_mrr_at_10_std
value: -2.0353663198929253
- type: nauc_mrr_at_1_diff1
value: 76.632058433229
- type: nauc_mrr_at_1_max
value: 39.754483718891656
- type: nauc_mrr_at_1_std
value: -2.962241058101701
- type: nauc_mrr_at_20_diff1
value: 75.47221882396194
- type: nauc_mrr_at_20_max
value: 41.50779280480839
- type: nauc_mrr_at_20_std
value: -1.9620212266426307
- type: nauc_mrr_at_3_diff1
value: 75.5682297897137
- type: nauc_mrr_at_3_max
value: 41.53543801506081
- type: nauc_mrr_at_3_std
value: -3.391681195945978
- type: nauc_mrr_at_5_diff1
value: 75.37562775183947
- type: nauc_mrr_at_5_max
value: 41.42028509006753
- type: nauc_mrr_at_5_std
value: -2.418698675622726
- type: nauc_ndcg_at_1000_diff1
value: 59.364557011624
- type: nauc_ndcg_at_1000_max
value: 35.4112238125149
- type: nauc_ndcg_at_1000_std
value: 3.717516193303376
- type: nauc_ndcg_at_100_diff1
value: 58.55706703023122
- type: nauc_ndcg_at_100_max
value: 35.352285999934594
- type: nauc_ndcg_at_100_std
value: 4.273437944266781
- type: nauc_ndcg_at_10_diff1
value: 56.77422701267037
- type: nauc_ndcg_at_10_max
value: 34.24909893882957
- type: nauc_ndcg_at_10_std
value: 4.178151434006727
- type: nauc_ndcg_at_1_diff1
value: 76.632058433229
- type: nauc_ndcg_at_1_max
value: 39.754483718891656
- type: nauc_ndcg_at_1_std
value: -2.962241058101701
- type: nauc_ndcg_at_20_diff1
value: 57.27343398231262
- type: nauc_ndcg_at_20_max
value: 34.7416626740278
- type: nauc_ndcg_at_20_std
value: 4.955858766014002
- type: nauc_ndcg_at_3_diff1
value: 57.69267803121093
- type: nauc_ndcg_at_3_max
value: 33.13744317023105
- type: nauc_ndcg_at_3_std
value: 0.40380284030057023
- type: nauc_ndcg_at_5_diff1
value: 56.57461019113917
- type: nauc_ndcg_at_5_max
value: 33.244657840804386
- type: nauc_ndcg_at_5_std
value: 2.5121440827702046
- type: nauc_precision_at_1000_diff1
value: -14.54492513449718
- type: nauc_precision_at_1000_max
value: -5.94552147573623
- type: nauc_precision_at_1000_std
value: 1.2446209816057374
- type: nauc_precision_at_100_diff1
value: -15.452676132568344
- type: nauc_precision_at_100_max
value: -3.760241749847617
- type: nauc_precision_at_100_std
value: 4.623534605290865
- type: nauc_precision_at_10_diff1
value: -12.712908026086176
- type: nauc_precision_at_10_max
value: 0.45241316994816805
- type: nauc_precision_at_10_std
value: 7.849478570138391
- type: nauc_precision_at_1_diff1
value: 76.632058433229
- type: nauc_precision_at_1_max
value: 39.754483718891656
- type: nauc_precision_at_1_std
value: -2.962241058101701
- type: nauc_precision_at_20_diff1
value: -14.514618673172041
- type: nauc_precision_at_20_max
value: -1.113635490621818
- type: nauc_precision_at_20_std
value: 8.599811730457576
- type: nauc_precision_at_3_diff1
value: 6.1367799850003815
- type: nauc_precision_at_3_max
value: 8.466271950897857
- type: nauc_precision_at_3_std
value: 1.7458051543195068
- type: nauc_precision_at_5_diff1
value: -5.804548945783379
- type: nauc_precision_at_5_max
value: 3.4060251839074818
- type: nauc_precision_at_5_std
value: 5.583410511782371
- type: nauc_recall_at_1000_diff1
value: 19.329432953574095
- type: nauc_recall_at_1000_max
value: 43.260442595158736
- type: nauc_recall_at_1000_std
value: 53.89644660661804
- type: nauc_recall_at_100_diff1
value: 21.265326296051235
- type: nauc_recall_at_100_max
value: 38.573000195373695
- type: nauc_recall_at_100_std
value: 42.169391082152785
- type: nauc_recall_at_10_diff1
value: 29.785129558987432
- type: nauc_recall_at_10_max
value: 28.379657867558034
- type: nauc_recall_at_10_std
value: 21.132574624091973
- type: nauc_recall_at_1_diff1
value: 60.645344065853145
- type: nauc_recall_at_1_max
value: 31.232776777514797
- type: nauc_recall_at_1_std
value: -1.1946138176109171
- type: nauc_recall_at_20_diff1
value: 25.88845612373954
- type: nauc_recall_at_20_max
value: 30.24785945821152
- type: nauc_recall_at_20_std
value: 31.73911437468067
- type: nauc_recall_at_3_diff1
value: 42.2968464797395
- type: nauc_recall_at_3_max
value: 26.494318009870018
- type: nauc_recall_at_3_std
value: 2.6045977160467544
- type: nauc_recall_at_5_diff1
value: 35.81340094401374
- type: nauc_recall_at_5_max
value: 25.91082947510634
- type: nauc_recall_at_5_std
value: 9.759404930864779
- type: ndcg_at_1
value: 87.819
- type: ndcg_at_10
value: 90.986
- type: ndcg_at_100
value: 91.69
- type: ndcg_at_1000
value: 91.863
- type: ndcg_at_20
value: 91.293
- type: ndcg_at_3
value: 89.621
- type: ndcg_at_5
value: 90.333
- type: precision_at_1
value: 87.819
- type: precision_at_10
value: 10.753
- type: precision_at_100
value: 1.138
- type: precision_at_1000
value: 0.117
- type: precision_at_20
value: 5.4879999999999995
- type: precision_at_3
value: 33.703
- type: precision_at_5
value: 20.831
- type: recall_at_1
value: 81.601
- type: recall_at_10
value: 95.44200000000001
- type: recall_at_100
value: 98.14399999999999
- type: recall_at_1000
value: 99.157
- type: recall_at_20
value: 96.43
- type: recall_at_3
value: 91.729
- type: recall_at_5
value: 93.552
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 56.056
- type: map_at_1
value: 28.666000000000004
- type: map_at_10
value: 47.437000000000005
- type: map_at_100
value: 49.537
- type: map_at_1000
value: 49.665
- type: map_at_20
value: 48.618
- type: map_at_3
value: 41.355
- type: map_at_5
value: 44.525
- type: mrr_at_1
value: 55.55555555555556
- type: mrr_at_10
value: 63.705173427395614
- type: mrr_at_100
value: 64.25449940779741
- type: mrr_at_1000
value: 64.27635581092147
- type: mrr_at_20
value: 64.03796029079103
- type: mrr_at_3
value: 61.49691358024688
- type: mrr_at_5
value: 62.73148148148143
- type: nauc_map_at_1000_diff1
value: 43.24282910397747
- type: nauc_map_at_1000_max
value: 28.506093180265644
- type: nauc_map_at_1000_std
value: -13.040508386155054
- type: nauc_map_at_100_diff1
value: 43.23650442904607
- type: nauc_map_at_100_max
value: 28.470565635459156
- type: nauc_map_at_100_std
value: -12.988098780714935
- type: nauc_map_at_10_diff1
value: 43.393840733087686
- type: nauc_map_at_10_max
value: 26.637302062720153
- type: nauc_map_at_10_std
value: -14.47500292113762
- type: nauc_map_at_1_diff1
value: 47.705150227211725
- type: nauc_map_at_1_max
value: 15.354189686550129
- type: nauc_map_at_1_std
value: -14.559819859039067
- type: nauc_map_at_20_diff1
value: 43.14121075706104
- type: nauc_map_at_20_max
value: 27.811170590408395
- type: nauc_map_at_20_std
value: -13.459413585283583
- type: nauc_map_at_3_diff1
value: 44.33938667720801
- type: nauc_map_at_3_max
value: 21.785619884549398
- type: nauc_map_at_3_std
value: -15.569980103071593
- type: nauc_map_at_5_diff1
value: 43.39280905665027
- type: nauc_map_at_5_max
value: 25.021492190645017
- type: nauc_map_at_5_std
value: -14.48856622187443
- type: nauc_mrr_at_1000_diff1
value: 52.971563939946286
- type: nauc_mrr_at_1000_max
value: 38.88019486172324
- type: nauc_mrr_at_1000_std
value: -12.412991642381616
- type: nauc_mrr_at_100_diff1
value: 52.978468139876945
- type: nauc_mrr_at_100_max
value: 38.89751787948751
- type: nauc_mrr_at_100_std
value: -12.3677876252269
- type: nauc_mrr_at_10_diff1
value: 52.78507148048174
- type: nauc_mrr_at_10_max
value: 38.55079809310022
- type: nauc_mrr_at_10_std
value: -12.944127025078755
- type: nauc_mrr_at_1_diff1
value: 55.52626805861546
- type: nauc_mrr_at_1_max
value: 40.49306809164979
- type: nauc_mrr_at_1_std
value: -12.886607701317681
- type: nauc_mrr_at_20_diff1
value: 52.9592152665678
- type: nauc_mrr_at_20_max
value: 38.88514014589964
- type: nauc_mrr_at_20_std
value: -12.434464359819444
- type: nauc_mrr_at_3_diff1
value: 52.73696844091174
- type: nauc_mrr_at_3_max
value: 38.61018727252859
- type: nauc_mrr_at_3_std
value: -13.123989867364166
- type: nauc_mrr_at_5_diff1
value: 53.037110010188
- type: nauc_mrr_at_5_max
value: 38.44770729849151
- type: nauc_mrr_at_5_std
value: -13.49318771828972
- type: nauc_ndcg_at_1000_diff1
value: 44.73813840091289
- type: nauc_ndcg_at_1000_max
value: 33.70113904685389
- type: nauc_ndcg_at_1000_std
value: -10.328687058192742
- type: nauc_ndcg_at_100_diff1
value: 44.595174119928835
- type: nauc_ndcg_at_100_max
value: 33.4788285112467
- type: nauc_ndcg_at_100_std
value: -8.695355259716946
- type: nauc_ndcg_at_10_diff1
value: 44.39837225263
- type: nauc_ndcg_at_10_max
value: 29.188289725593393
- type: nauc_ndcg_at_10_std
value: -13.67608323673103
- type: nauc_ndcg_at_1_diff1
value: 55.52626805861546
- type: nauc_ndcg_at_1_max
value: 40.49306809164979
- type: nauc_ndcg_at_1_std
value: -12.886607701317681
- type: nauc_ndcg_at_20_diff1
value: 44.24661739902305
- type: nauc_ndcg_at_20_max
value: 31.667868318249965
- type: nauc_ndcg_at_20_std
value: -10.65470780066342
- type: nauc_ndcg_at_3_diff1
value: 43.39857166975522
- type: nauc_ndcg_at_3_max
value: 31.764668313577495
- type: nauc_ndcg_at_3_std
value: -14.494866954678152
- type: nauc_ndcg_at_5_diff1
value: 43.16976647347281
- type: nauc_ndcg_at_5_max
value: 29.878329062643143
- type: nauc_ndcg_at_5_std
value: -13.987689089179739
- type: nauc_precision_at_1000_diff1
value: -9.807973252625484
- type: nauc_precision_at_1000_max
value: 26.6279603849494
- type: nauc_precision_at_1000_std
value: 7.113187103520632
- type: nauc_precision_at_100_diff1
value: -4.777149603323976
- type: nauc_precision_at_100_max
value: 31.03410463692187
- type: nauc_precision_at_100_std
value: 10.463144150275435
- type: nauc_precision_at_10_diff1
value: 8.691528703215962
- type: nauc_precision_at_10_max
value: 33.329579434123374
- type: nauc_precision_at_10_std
value: -0.8002015226329403
- type: nauc_precision_at_1_diff1
value: 55.52626805861546
- type: nauc_precision_at_1_max
value: 40.49306809164979
- type: nauc_precision_at_1_std
value: -12.886607701317681
- type: nauc_precision_at_20_diff1
value: 3.4564653474184284
- type: nauc_precision_at_20_max
value: 34.401070158471136
- type: nauc_precision_at_20_std
value: 5.813431200164549
- type: nauc_precision_at_3_diff1
value: 22.463219705462187
- type: nauc_precision_at_3_max
value: 34.77413976546924
- type: nauc_precision_at_3_std
value: -7.083890789741479
- type: nauc_precision_at_5_diff1
value: 14.011006004883154
- type: nauc_precision_at_5_max
value: 35.73655466853702
- type: nauc_precision_at_5_std
value: -2.8395172077771598
- type: nauc_recall_at_1000_diff1
value: 16.478046357391555
- type: nauc_recall_at_1000_max
value: 43.231704288282344
- type: nauc_recall_at_1000_std
value: 38.430684937573645
- type: nauc_recall_at_100_diff1
value: 30.764718344602436
- type: nauc_recall_at_100_max
value: 31.769050487166655
- type: nauc_recall_at_100_std
value: 23.48468311677149
- type: nauc_recall_at_10_diff1
value: 34.47339565324045
- type: nauc_recall_at_10_max
value: 19.054212335800454
- type: nauc_recall_at_10_std
value: -11.039734015330437
- type: nauc_recall_at_1_diff1
value: 47.705150227211725
- type: nauc_recall_at_1_max
value: 15.354189686550129
- type: nauc_recall_at_1_std
value: -14.559819859039067
- type: nauc_recall_at_20_diff1
value: 32.1011474016873
- type: nauc_recall_at_20_max
value: 25.546372988304423
- type: nauc_recall_at_20_std
value: -0.007233471152482897
- type: nauc_recall_at_3_diff1
value: 37.5708138019065
- type: nauc_recall_at_3_max
value: 16.66410785756736
- type: nauc_recall_at_3_std
value: -15.404817020108966
- type: nauc_recall_at_5_diff1
value: 35.714519648479595
- type: nauc_recall_at_5_max
value: 19.02075233009296
- type: nauc_recall_at_5_std
value: -13.180963359760725
- type: ndcg_at_1
value: 55.556000000000004
- type: ndcg_at_10
value: 56.056
- type: ndcg_at_100
value: 62.44
- type: ndcg_at_1000
value: 64.263
- type: ndcg_at_20
value: 58.638999999999996
- type: ndcg_at_3
value: 51.722
- type: ndcg_at_5
value: 52.701
- type: precision_at_1
value: 55.556000000000004
- type: precision_at_10
value: 15.679000000000002
- type: precision_at_100
value: 2.252
- type: precision_at_1000
value: 0.257
- type: precision_at_20
value: 9.02
- type: precision_at_3
value: 34.619
- type: precision_at_5
value: 25.093
- type: recall_at_1
value: 28.666000000000004
- type: recall_at_10
value: 63.717999999999996
- type: recall_at_100
value: 86.938
- type: recall_at_1000
value: 97.603
- type: recall_at_20
value: 71.649
- type: recall_at_3
value: 46.663
- type: recall_at_5
value: 53.313
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 71.74199999999999
- type: map_at_1
value: 41.729
- type: map_at_10
value: 63.168
- type: map_at_100
value: 64.132
- type: map_at_1000
value: 64.199
- type: map_at_20
value: 63.736000000000004
- type: map_at_3
value: 59.826
- type: map_at_5
value: 61.882000000000005
- type: mrr_at_1
value: 83.45712356515868
- type: mrr_at_10
value: 87.850342432719
- type: mrr_at_100
value: 88.0016320691113
- type: mrr_at_1000
value: 88.00576596968136
- type: mrr_at_20
value: 87.94463253190389
- type: mrr_at_3
value: 87.13706954760278
- type: mrr_at_5
value: 87.59419311276136
- type: nauc_map_at_1000_diff1
value: 13.635446621095054
- type: nauc_map_at_1000_max
value: 18.670632529445633
- type: nauc_map_at_1000_std
value: 10.444842636150575
- type: nauc_map_at_100_diff1
value: 13.599262398010783
- type: nauc_map_at_100_max
value: 18.636389405484806
- type: nauc_map_at_100_std
value: 10.460027483576043
- type: nauc_map_at_10_diff1
value: 13.235053919323942
- type: nauc_map_at_10_max
value: 18.252140477080047
- type: nauc_map_at_10_std
value: 9.9075337042203
- type: nauc_map_at_1_diff1
value: 76.51940497836482
- type: nauc_map_at_1_max
value: 51.251419487235474
- type: nauc_map_at_1_std
value: 0.16714896857146574
- type: nauc_map_at_20_diff1
value: 13.4178245722222
- type: nauc_map_at_20_max
value: 18.40988771210718
- type: nauc_map_at_20_std
value: 10.216685163366282
- type: nauc_map_at_3_diff1
value: 13.38370761663418
- type: nauc_map_at_3_max
value: 17.760962555456537
- type: nauc_map_at_3_std
value: 7.15741965624388
- type: nauc_map_at_5_diff1
value: 13.138133309724855
- type: nauc_map_at_5_max
value: 17.871761295251044
- type: nauc_map_at_5_std
value: 8.475147426940074
- type: nauc_mrr_at_1000_diff1
value: 75.82650818891959
- type: nauc_mrr_at_1000_max
value: 53.6736100668434
- type: nauc_mrr_at_1000_std
value: 1.8025016349213916
- type: nauc_mrr_at_100_diff1
value: 75.82530574210111
- type: nauc_mrr_at_100_max
value: 53.68067545829002
- type: nauc_mrr_at_100_std
value: 1.8147470536495791
- type: nauc_mrr_at_10_diff1
value: 75.8330135686799
- type: nauc_mrr_at_10_max
value: 53.78626885349077
- type: nauc_mrr_at_10_std
value: 1.7975782717226636
- type: nauc_mrr_at_1_diff1
value: 76.51940497836482
- type: nauc_mrr_at_1_max
value: 51.251419487235474
- type: nauc_mrr_at_1_std
value: 0.16714896857146574
- type: nauc_mrr_at_20_diff1
value: 75.82783382464166
- type: nauc_mrr_at_20_max
value: 53.68364567043885
- type: nauc_mrr_at_20_std
value: 1.742037904463963
- type: nauc_mrr_at_3_diff1
value: 75.6944609768663
- type: nauc_mrr_at_3_max
value: 53.803941340341666
- type: nauc_mrr_at_3_std
value: 1.1849945458077804
- type: nauc_mrr_at_5_diff1
value: 75.73006960604903
- type: nauc_mrr_at_5_max
value: 53.62223096420106
- type: nauc_mrr_at_5_std
value: 1.6144067563410909
- type: nauc_ndcg_at_1000_diff1
value: 21.58025241642726
- type: nauc_ndcg_at_1000_max
value: 24.675747527001153
- type: nauc_ndcg_at_1000_std
value: 13.075943547492718
- type: nauc_ndcg_at_100_diff1
value: 20.30260137544846
- type: nauc_ndcg_at_100_max
value: 23.757528813872018
- type: nauc_ndcg_at_100_std
value: 13.648994687574062
- type: nauc_ndcg_at_10_diff1
value: 18.995052360997818
- type: nauc_ndcg_at_10_max
value: 22.254260808196037
- type: nauc_ndcg_at_10_std
value: 11.27212390633054
- type: nauc_ndcg_at_1_diff1
value: 76.51940497836482
- type: nauc_ndcg_at_1_max
value: 51.251419487235474
- type: nauc_ndcg_at_1_std
value: 0.16714896857146574
- type: nauc_ndcg_at_20_diff1
value: 19.333742380695757
- type: nauc_ndcg_at_20_max
value: 22.527779834633364
- type: nauc_ndcg_at_20_std
value: 12.161009000707917
- type: nauc_ndcg_at_3_diff1
value: 20.013329040965534
- type: nauc_ndcg_at_3_max
value: 21.99692460311921
- type: nauc_ndcg_at_3_std
value: 6.8076290638386165
- type: nauc_ndcg_at_5_diff1
value: 19.08226315942471
- type: nauc_ndcg_at_5_max
value: 21.71185964294168
- type: nauc_ndcg_at_5_std
value: 8.671911269518214
- type: nauc_precision_at_1000_diff1
value: 2.4462475489446764
- type: nauc_precision_at_1000_max
value: 29.145662064268578
- type: nauc_precision_at_1000_std
value: 49.20704909525856
- type: nauc_precision_at_100_diff1
value: 0.11271196725540299
- type: nauc_precision_at_100_max
value: 17.37584606388067
- type: nauc_precision_at_100_std
value: 34.66099346244071
- type: nauc_precision_at_10_diff1
value: 2.9923183951227825
- type: nauc_precision_at_10_max
value: 14.261884731124264
- type: nauc_precision_at_10_std
value: 18.084188795498378
- type: nauc_precision_at_1_diff1
value: 76.51940497836482
- type: nauc_precision_at_1_max
value: 51.251419487235474
- type: nauc_precision_at_1_std
value: 0.16714896857146574
- type: nauc_precision_at_20_diff1
value: 1.9180293008303761
- type: nauc_precision_at_20_max
value: 13.832269193468512
- type: nauc_precision_at_20_std
value: 21.65284406055607
- type: nauc_precision_at_3_diff1
value: 7.226609484731811
- type: nauc_precision_at_3_max
value: 15.162908526977272
- type: nauc_precision_at_3_std
value: 8.451859972962776
- type: nauc_precision_at_5_diff1
value: 4.705236845538159
- type: nauc_precision_at_5_max
value: 14.022910843582666
- type: nauc_precision_at_5_std
value: 11.777269322821605
- type: nauc_recall_at_1000_diff1
value: 2.446247548945172
- type: nauc_recall_at_1000_max
value: 29.14566206426889
- type: nauc_recall_at_1000_std
value: 49.20704909525879
- type: nauc_recall_at_100_diff1
value: 0.1127119672553316
- type: nauc_recall_at_100_max
value: 17.37584606388062
- type: nauc_recall_at_100_std
value: 34.660993462440686
- type: nauc_recall_at_10_diff1
value: 2.9923183951227927
- type: nauc_recall_at_10_max
value: 14.261884731124299
- type: nauc_recall_at_10_std
value: 18.08418879549837
- type: nauc_recall_at_1_diff1
value: 76.51940497836482
- type: nauc_recall_at_1_max
value: 51.251419487235474
- type: nauc_recall_at_1_std
value: 0.16714896857146574
- type: nauc_recall_at_20_diff1
value: 1.918029300830432
- type: nauc_recall_at_20_max
value: 13.832269193468566
- type: nauc_recall_at_20_std
value: 21.65284406055605
- type: nauc_recall_at_3_diff1
value: 7.226609484731802
- type: nauc_recall_at_3_max
value: 15.162908526977182
- type: nauc_recall_at_3_std
value: 8.451859972962634
- type: nauc_recall_at_5_diff1
value: 4.705236845538197
- type: nauc_recall_at_5_max
value: 14.02291084358265
- type: nauc_recall_at_5_std
value: 11.777269322821638
- type: ndcg_at_1
value: 83.45700000000001
- type: ndcg_at_10
value: 71.74199999999999
- type: ndcg_at_100
value: 75.008
- type: ndcg_at_1000
value: 76.242
- type: ndcg_at_20
value: 73.114
- type: ndcg_at_3
value: 67.128
- type: ndcg_at_5
value: 69.645
- type: precision_at_1
value: 83.45700000000001
- type: precision_at_10
value: 14.747
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.189
- type: precision_at_20
value: 7.8149999999999995
- type: precision_at_3
value: 42.323
- type: precision_at_5
value: 27.381
- type: recall_at_1
value: 41.729
- type: recall_at_10
value: 73.734
- type: recall_at_100
value: 86.502
- type: recall_at_1000
value: 94.60499999999999
- type: recall_at_20
value: 78.14999999999999
- type: recall_at_3
value: 63.483999999999995
- type: recall_at_5
value: 68.45400000000001
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.4904
- type: ap
value: 94.85481918794709
- type: ap_weighted
value: 94.85481918794709
- type: f1
value: 96.4898592305707
- type: f1_weighted
value: 96.4898592305707
- type: main_score
value: 96.4904
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 43.692
- type: map_at_1
value: 23.751
- type: map_at_10
value: 36.553999999999995
- type: map_at_100
value: 37.721
- type: map_at_1000
value: 37.763999999999996
- type: map_at_20
value: 37.289
- type: map_at_3
value: 32.643
- type: map_at_5
value: 34.851
- type: mrr_at_1
value: 24.455587392550143
- type: mrr_at_10
value: 37.18388706963206
- type: mrr_at_100
value: 38.28330737932916
- type: mrr_at_1000
value: 38.32054399710817
- type: mrr_at_20
value: 37.8818001216278
- type: mrr_at_3
value: 33.35721107927405
- type: mrr_at_5
value: 35.52483285577843
- type: nauc_map_at_1000_diff1
value: 36.3576177260684
- type: nauc_map_at_1000_max
value: 7.854511605962703
- type: nauc_map_at_1000_std
value: -17.701121059746878
- type: nauc_map_at_100_diff1
value: 36.356075649230505
- type: nauc_map_at_100_max
value: 7.862168042999533
- type: nauc_map_at_100_std
value: -17.670102459097233
- type: nauc_map_at_10_diff1
value: 36.22122978875574
- type: nauc_map_at_10_max
value: 7.80848606967416
- type: nauc_map_at_10_std
value: -18.3265151386167
- type: nauc_map_at_1_diff1
value: 39.28605466408357
- type: nauc_map_at_1_max
value: 6.20202977590459
- type: nauc_map_at_1_std
value: -15.734334090045026
- type: nauc_map_at_20_diff1
value: 36.33637880909657
- type: nauc_map_at_20_max
value: 7.843437969476022
- type: nauc_map_at_20_std
value: -17.917533363025996
- type: nauc_map_at_3_diff1
value: 36.24864976076741
- type: nauc_map_at_3_max
value: 7.420345251835957
- type: nauc_map_at_3_std
value: -18.71678497722944
- type: nauc_map_at_5_diff1
value: 36.0789619291824
- type: nauc_map_at_5_max
value: 7.7314285669514495
- type: nauc_map_at_5_std
value: -18.748688764538706
- type: nauc_mrr_at_1000_diff1
value: 36.23912675623378
- type: nauc_mrr_at_1000_max
value: 7.690553436255147
- type: nauc_mrr_at_1000_std
value: -17.609526070212304
- type: nauc_mrr_at_100_diff1
value: 36.23782651189002
- type: nauc_mrr_at_100_max
value: 7.70075095171647
- type: nauc_mrr_at_100_std
value: -17.575714144960184
- type: nauc_mrr_at_10_diff1
value: 36.125229472534215
- type: nauc_mrr_at_10_max
value: 7.635472248755658
- type: nauc_mrr_at_10_std
value: -18.208166616511086
- type: nauc_mrr_at_1_diff1
value: 39.20986875554532
- type: nauc_mrr_at_1_max
value: 6.062668487561363
- type: nauc_mrr_at_1_std
value: -16.04130340817602
- type: nauc_mrr_at_20_diff1
value: 36.21207088739667
- type: nauc_mrr_at_20_max
value: 7.699610250145951
- type: nauc_mrr_at_20_std
value: -17.778245221724028
- type: nauc_mrr_at_3_diff1
value: 36.03957583885305
- type: nauc_mrr_at_3_max
value: 7.225515576504581
- type: nauc_mrr_at_3_std
value: -18.74478742943741
- type: nauc_mrr_at_5_diff1
value: 35.969152496648974
- type: nauc_mrr_at_5_max
value: 7.584059789018233
- type: nauc_mrr_at_5_std
value: -18.569374723129332
- type: nauc_ndcg_at_1000_diff1
value: 35.894655529841806
- type: nauc_ndcg_at_1000_max
value: 8.579327424366236
- type: nauc_ndcg_at_1000_std
value: -16.359677367747896
- type: nauc_ndcg_at_100_diff1
value: 35.89861902483983
- type: nauc_ndcg_at_100_max
value: 8.830873623962242
- type: nauc_ndcg_at_100_std
value: -15.173125564722978
- type: nauc_ndcg_at_10_diff1
value: 35.36499811105169
- type: nauc_ndcg_at_10_max
value: 8.449267180956992
- type: nauc_ndcg_at_10_std
value: -18.41978802362402
- type: nauc_ndcg_at_1_diff1
value: 39.15422481210622
- type: nauc_ndcg_at_1_max
value: 6.055515791928331
- type: nauc_ndcg_at_1_std
value: -16.042779610876252
- type: nauc_ndcg_at_20_diff1
value: 35.73402868264468
- type: nauc_ndcg_at_20_max
value: 8.695705518210847
- type: nauc_ndcg_at_20_std
value: -16.7735829470466
- type: nauc_ndcg_at_3_diff1
value: 35.31358242856231
- type: nauc_ndcg_at_3_max
value: 7.645692789058997
- type: nauc_ndcg_at_3_std
value: -19.460003734786874
- type: nauc_ndcg_at_5_diff1
value: 35.05216588927143
- type: nauc_ndcg_at_5_max
value: 8.216690520604715
- type: nauc_ndcg_at_5_std
value: -19.3982054492159
- type: nauc_precision_at_1000_diff1
value: -4.440002625111349
- type: nauc_precision_at_1000_max
value: 7.886988951901723
- type: nauc_precision_at_1000_std
value: 9.88111187048247
- type: nauc_precision_at_100_diff1
value: 15.728286119463325
- type: nauc_precision_at_100_max
value: 13.218650824470654
- type: nauc_precision_at_100_std
value: 16.113245895522553
- type: nauc_precision_at_10_diff1
value: 29.51218489610567
- type: nauc_precision_at_10_max
value: 10.197432401942912
- type: nauc_precision_at_10_std
value: -16.950603431359493
- type: nauc_precision_at_1_diff1
value: 39.15422481210622
- type: nauc_precision_at_1_max
value: 6.055515791928331
- type: nauc_precision_at_1_std
value: -16.042779610876252
- type: nauc_precision_at_20_diff1
value: 27.825993070397338
- type: nauc_precision_at_20_max
value: 11.437632287846007
- type: nauc_precision_at_20_std
value: -7.450353566405601
- type: nauc_precision_at_3_diff1
value: 32.14135556796588
- type: nauc_precision_at_3_max
value: 7.989252443574163
- type: nauc_precision_at_3_std
value: -21.566254595671055
- type: nauc_precision_at_5_diff1
value: 30.68778685307082
- type: nauc_precision_at_5_max
value: 9.332160758499892
- type: nauc_precision_at_5_std
value: -20.928554713448914
- type: nauc_recall_at_1000_diff1
value: 25.00810478716878
- type: nauc_recall_at_1000_max
value: 46.518165765201644
- type: nauc_recall_at_1000_std
value: 61.4734635576085
- type: nauc_recall_at_100_diff1
value: 33.895581318261726
- type: nauc_recall_at_100_max
value: 20.10706035872801
- type: nauc_recall_at_100_std
value: 24.204226584457047
- type: nauc_recall_at_10_diff1
value: 32.363127359576296
- type: nauc_recall_at_10_max
value: 10.729923804989545
- type: nauc_recall_at_10_std
value: -18.1335370184202
- type: nauc_recall_at_1_diff1
value: 39.28605466408357
- type: nauc_recall_at_1_max
value: 6.20202977590459
- type: nauc_recall_at_1_std
value: -15.734334090045026
- type: nauc_recall_at_20_diff1
value: 33.47804003169795
- type: nauc_recall_at_20_max
value: 12.781494765263382
- type: nauc_recall_at_20_std
value: -9.263970132202658
- type: nauc_recall_at_3_diff1
value: 32.71001429428999
- type: nauc_recall_at_3_max
value: 8.353439197382693
- type: nauc_recall_at_3_std
value: -21.235097744366954
- type: nauc_recall_at_5_diff1
value: 31.87451464963415
- type: nauc_recall_at_5_max
value: 9.635051450907305
- type: nauc_recall_at_5_std
value: -21.113235357132794
- type: ndcg_at_1
value: 24.47
- type: ndcg_at_10
value: 43.692
- type: ndcg_at_100
value: 49.211
- type: ndcg_at_1000
value: 50.244
- type: ndcg_at_20
value: 46.278000000000006
- type: ndcg_at_3
value: 35.719
- type: ndcg_at_5
value: 39.652
- type: precision_at_1
value: 24.47
- type: precision_at_10
value: 6.857
- type: precision_at_100
value: 0.9610000000000001
- type: precision_at_1000
value: 0.105
- type: precision_at_20
value: 3.968
- type: precision_at_3
value: 15.181000000000001
- type: precision_at_5
value: 11.117
- type: recall_at_1
value: 23.751
- type: recall_at_10
value: 65.64
- type: recall_at_100
value: 90.967
- type: recall_at_1000
value: 98.738
- type: recall_at_20
value: 75.639
- type: recall_at_3
value: 43.927
- type: recall_at_5
value: 53.366
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 98.82580939352485
- type: f1
value: 98.75201754333801
- type: f1_weighted
value: 98.82795205108245
- type: main_score
value: 98.82580939352485
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 92.29822161422709
- type: f1
value: 77.75210224871594
- type: f1_weighted
value: 93.58661422540348
- type: main_score
value: 92.29822161422709
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 85.17484868863484
- type: f1
value: 81.94484244487094
- type: f1_weighted
value: 85.21022593423332
- type: main_score
value: 85.17484868863484
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 89.61667787491594
- type: f1
value: 89.02701927621264
- type: f1_weighted
value: 89.56306982022801
- type: main_score
value: 89.61667787491594
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 46.318282423948574
- type: v_measure
value: 46.318282423948574
- type: v_measure_std
value: 0.9729055662461538
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 44.29033625273981
- type: v_measure
value: 44.29033625273981
- type: v_measure_std
value: 1.0596383629128594
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 33.0526129239962
- type: map
value: 33.0526129239962
- type: mrr
value: 34.29260046890935
- type: nAUC_map_diff1
value: 12.579738077238032
- type: nAUC_map_max
value: -20.936629344962
- type: nAUC_map_std
value: -1.6096805784945216
- type: nAUC_mrr_diff1
value: 11.597584463580807
- type: nAUC_mrr_max
value: -15.723702838537504
- type: nAUC_mrr_std
value: 0.2719172965777737
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 41.486000000000004
- type: map_at_1
value: 6.866
- type: map_at_10
value: 15.895999999999999
- type: map_at_100
value: 21.093
- type: map_at_1000
value: 23.067
- type: map_at_20
value: 18.125
- type: map_at_3
value: 11.421000000000001
- type: map_at_5
value: 13.415
- type: mrr_at_1
value: 52.63157894736842
- type: mrr_at_10
value: 61.486805248415166
- type: mrr_at_100
value: 62.08211009182091
- type: mrr_at_1000
value: 62.10828701365016
- type: mrr_at_20
value: 61.904411187915784
- type: mrr_at_3
value: 59.90712074303407
- type: mrr_at_5
value: 60.91331269349847
- type: nauc_map_at_1000_diff1
value: 25.484625278529403
- type: nauc_map_at_1000_max
value: 31.206600396418853
- type: nauc_map_at_1000_std
value: 15.569448072357156
- type: nauc_map_at_100_diff1
value: 27.636750226316764
- type: nauc_map_at_100_max
value: 29.66992681250722
- type: nauc_map_at_100_std
value: 10.570600484002671
- type: nauc_map_at_10_diff1
value: 32.76642525548697
- type: nauc_map_at_10_max
value: 21.459225397237663
- type: nauc_map_at_10_std
value: -3.546494734209264
- type: nauc_map_at_1_diff1
value: 48.8002894871328
- type: nauc_map_at_1_max
value: 5.7236722609868815
- type: nauc_map_at_1_std
value: -13.283554044471352
- type: nauc_map_at_20_diff1
value: 30.57169701502308
- type: nauc_map_at_20_max
value: 25.79666139518404
- type: nauc_map_at_20_std
value: 1.781732492989651
- type: nauc_map_at_3_diff1
value: 40.076315947201095
- type: nauc_map_at_3_max
value: 12.862524429140054
- type: nauc_map_at_3_std
value: -9.188349777126817
- type: nauc_map_at_5_diff1
value: 36.9918718052938
- type: nauc_map_at_5_max
value: 16.74234374361876
- type: nauc_map_at_5_std
value: -7.818523349307494
- type: nauc_mrr_at_1000_diff1
value: 26.88183002609805
- type: nauc_mrr_at_1000_max
value: 47.10209348428658
- type: nauc_mrr_at_1000_std
value: 32.067825924992924
- type: nauc_mrr_at_100_diff1
value: 26.871482491566745
- type: nauc_mrr_at_100_max
value: 47.11303868498556
- type: nauc_mrr_at_100_std
value: 32.08961428818868
- type: nauc_mrr_at_10_diff1
value: 26.6356914977722
- type: nauc_mrr_at_10_max
value: 47.091624558810366
- type: nauc_mrr_at_10_std
value: 31.942424120660164
- type: nauc_mrr_at_1_diff1
value: 28.19774198483673
- type: nauc_mrr_at_1_max
value: 41.44380927834253
- type: nauc_mrr_at_1_std
value: 25.18222691885917
- type: nauc_mrr_at_20_diff1
value: 26.86487347109452
- type: nauc_mrr_at_20_max
value: 47.1987778214726
- type: nauc_mrr_at_20_std
value: 32.143517921610034
- type: nauc_mrr_at_3_diff1
value: 27.34340373236422
- type: nauc_mrr_at_3_max
value: 46.358726506276646
- type: nauc_mrr_at_3_std
value: 31.74924155572593
- type: nauc_mrr_at_5_diff1
value: 27.209667205060672
- type: nauc_mrr_at_5_max
value: 46.79883369072009
- type: nauc_mrr_at_5_std
value: 31.655605306670758
- type: nauc_ndcg_at_1000_diff1
value: 18.940195769769687
- type: nauc_ndcg_at_1000_max
value: 46.48551313937331
- type: nauc_ndcg_at_1000_std
value: 33.64819502089232
- type: nauc_ndcg_at_100_diff1
value: 19.50885253809146
- type: nauc_ndcg_at_100_max
value: 40.53174462354878
- type: nauc_ndcg_at_100_std
value: 28.516152877751118
- type: nauc_ndcg_at_10_diff1
value: 16.01699218096564
- type: nauc_ndcg_at_10_max
value: 41.17322878314514
- type: nauc_ndcg_at_10_std
value: 29.002233224832196
- type: nauc_ndcg_at_1_diff1
value: 27.443547710102205
- type: nauc_ndcg_at_1_max
value: 40.66529763309582
- type: nauc_ndcg_at_1_std
value: 24.15016766225869
- type: nauc_ndcg_at_20_diff1
value: 17.541197675685062
- type: nauc_ndcg_at_20_max
value: 40.53231266973844
- type: nauc_ndcg_at_20_std
value: 29.54096347876548
- type: nauc_ndcg_at_3_diff1
value: 18.649628357473716
- type: nauc_ndcg_at_3_max
value: 41.18603570171764
- type: nauc_ndcg_at_3_std
value: 27.125524188420396
- type: nauc_ndcg_at_5_diff1
value: 17.519593751448483
- type: nauc_ndcg_at_5_max
value: 42.715997890377345
- type: nauc_ndcg_at_5_std
value: 27.902627839899868
- type: nauc_precision_at_1000_diff1
value: -15.528797630565155
- type: nauc_precision_at_1000_max
value: 13.741640921778671
- type: nauc_precision_at_1000_std
value: 44.50896053788372
- type: nauc_precision_at_100_diff1
value: -14.491464489721887
- type: nauc_precision_at_100_max
value: 23.136434418999457
- type: nauc_precision_at_100_std
value: 49.73145147863128
- type: nauc_precision_at_10_diff1
value: -4.829188942994277
- type: nauc_precision_at_10_max
value: 40.327612559528866
- type: nauc_precision_at_10_std
value: 39.34919529635044
- type: nauc_precision_at_1_diff1
value: 28.19774198483673
- type: nauc_precision_at_1_max
value: 41.44380927834253
- type: nauc_precision_at_1_std
value: 25.18222691885917
- type: nauc_precision_at_20_diff1
value: -7.210726293112847
- type: nauc_precision_at_20_max
value: 37.195679576636984
- type: nauc_precision_at_20_std
value: 45.4597096418357
- type: nauc_precision_at_3_diff1
value: 7.578219537774854
- type: nauc_precision_at_3_max
value: 41.59775233475654
- type: nauc_precision_at_3_std
value: 30.764584790895118
- type: nauc_precision_at_5_diff1
value: 1.655451789039598
- type: nauc_precision_at_5_max
value: 43.435739407610455
- type: nauc_precision_at_5_std
value: 33.42552263325999
- type: nauc_recall_at_1000_diff1
value: 5.030705700690516
- type: nauc_recall_at_1000_max
value: 19.108072570815583
- type: nauc_recall_at_1000_std
value: 14.697734974217308
- type: nauc_recall_at_100_diff1
value: 14.746540318132407
- type: nauc_recall_at_100_max
value: 21.798705033854795
- type: nauc_recall_at_100_std
value: 11.416195108842587
- type: nauc_recall_at_10_diff1
value: 25.548642427860486
- type: nauc_recall_at_10_max
value: 18.711677681987474
- type: nauc_recall_at_10_std
value: -5.988904818971677
- type: nauc_recall_at_1_diff1
value: 48.8002894871328
- type: nauc_recall_at_1_max
value: 5.7236722609868815
- type: nauc_recall_at_1_std
value: -13.283554044471352
- type: nauc_recall_at_20_diff1
value: 23.39140739154809
- type: nauc_recall_at_20_max
value: 19.351150636155474
- type: nauc_recall_at_20_std
value: -2.757280266915132
- type: nauc_recall_at_3_diff1
value: 38.17453576012812
- type: nauc_recall_at_3_max
value: 13.47003839643972
- type: nauc_recall_at_3_std
value: -8.75780163862688
- type: nauc_recall_at_5_diff1
value: 33.02812855226899
- type: nauc_recall_at_5_max
value: 15.477626408978477
- type: nauc_recall_at_5_std
value: -9.072206441070708
- type: ndcg_at_1
value: 50.773999999999994
- type: ndcg_at_10
value: 41.486000000000004
- type: ndcg_at_100
value: 39.051
- type: ndcg_at_1000
value: 48.106
- type: ndcg_at_20
value: 39.432
- type: ndcg_at_3
value: 47.428
- type: ndcg_at_5
value: 45.227000000000004
- type: precision_at_1
value: 52.632
- type: precision_at_10
value: 31.146
- type: precision_at_100
value: 10.328
- type: precision_at_1000
value: 2.432
- type: precision_at_20
value: 23.793
- type: precision_at_3
value: 45.201
- type: precision_at_5
value: 39.876
- type: recall_at_1
value: 6.866
- type: recall_at_10
value: 20.447000000000003
- type: recall_at_100
value: 40.607
- type: recall_at_1000
value: 73.411
- type: recall_at_20
value: 26.082
- type: recall_at_3
value: 12.484
- type: recall_at_5
value: 15.847
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 69.072
- type: map_at_1
value: 45.483000000000004
- type: map_at_10
value: 62.050000000000004
- type: map_at_100
value: 62.693
- type: map_at_1000
value: 62.702999999999996
- type: map_at_20
value: 62.498
- type: map_at_3
value: 58.285
- type: map_at_5
value: 60.711000000000006
- type: mrr_at_1
value: 50.840092699884124
- type: mrr_at_10
value: 64.54635224116673
- type: mrr_at_100
value: 64.9526548702289
- type: mrr_at_1000
value: 64.95908460752281
- type: mrr_at_20
value: 64.82949565799959
- type: mrr_at_3
value: 61.89165701042856
- type: mrr_at_5
value: 63.632676709154026
- type: nauc_map_at_1000_diff1
value: 43.187285304185224
- type: nauc_map_at_1000_max
value: 32.39921659632756
- type: nauc_map_at_1000_std
value: -5.780901333066553
- type: nauc_map_at_100_diff1
value: 43.184487221204456
- type: nauc_map_at_100_max
value: 32.41176116347982
- type: nauc_map_at_100_std
value: -5.76422606662383
- type: nauc_map_at_10_diff1
value: 42.967066814031746
- type: nauc_map_at_10_max
value: 32.489617364418514
- type: nauc_map_at_10_std
value: -6.029045531102664
- type: nauc_map_at_1_diff1
value: 46.16376563218624
- type: nauc_map_at_1_max
value: 26.342624776802232
- type: nauc_map_at_1_std
value: -7.142171388751972
- type: nauc_map_at_20_diff1
value: 43.15894358608328
- type: nauc_map_at_20_max
value: 32.46492198956245
- type: nauc_map_at_20_std
value: -5.788373305449195
- type: nauc_map_at_3_diff1
value: 43.231752344608545
- type: nauc_map_at_3_max
value: 31.68003009949564
- type: nauc_map_at_3_std
value: -8.015235132765458
- type: nauc_map_at_5_diff1
value: 42.86197608819917
- type: nauc_map_at_5_max
value: 32.363857571094485
- type: nauc_map_at_5_std
value: -6.780487416387977
- type: nauc_mrr_at_1000_diff1
value: 43.40542912045782
- type: nauc_mrr_at_1000_max
value: 32.8461770324533
- type: nauc_mrr_at_1000_std
value: -3.6505425530008204
- type: nauc_mrr_at_100_diff1
value: 43.40233508014468
- type: nauc_mrr_at_100_max
value: 32.85598538385942
- type: nauc_mrr_at_100_std
value: -3.637477352635459
- type: nauc_mrr_at_10_diff1
value: 43.260179162806054
- type: nauc_mrr_at_10_max
value: 32.942643527040474
- type: nauc_mrr_at_10_std
value: -3.712052825320437
- type: nauc_mrr_at_1_diff1
value: 46.354919460881206
- type: nauc_mrr_at_1_max
value: 29.1760258591106
- type: nauc_mrr_at_1_std
value: -4.107225031227406
- type: nauc_mrr_at_20_diff1
value: 43.37092385434311
- type: nauc_mrr_at_20_max
value: 32.93390254712846
- type: nauc_mrr_at_20_std
value: -3.5719056112132006
- type: nauc_mrr_at_3_diff1
value: 43.1744474040527
- type: nauc_mrr_at_3_max
value: 32.741290559777994
- type: nauc_mrr_at_3_std
value: -4.72677925120697
- type: nauc_mrr_at_5_diff1
value: 43.108396819975674
- type: nauc_mrr_at_5_max
value: 32.970519514893084
- type: nauc_mrr_at_5_std
value: -4.090906158975974
- type: nauc_ndcg_at_1000_diff1
value: 42.786664193638714
- type: nauc_ndcg_at_1000_max
value: 33.65554095609296
- type: nauc_ndcg_at_1000_std
value: -4.024030130584482
- type: nauc_ndcg_at_100_diff1
value: 42.691246775210814
- type: nauc_ndcg_at_100_max
value: 34.063232335110875
- type: nauc_ndcg_at_100_std
value: -3.477813807415248
- type: nauc_ndcg_at_10_diff1
value: 41.90988990571757
- type: nauc_ndcg_at_10_max
value: 34.58934812881633
- type: nauc_ndcg_at_10_std
value: -4.3295110195497655
- type: nauc_ndcg_at_1_diff1
value: 46.354919460881206
- type: nauc_ndcg_at_1_max
value: 29.1760258591106
- type: nauc_ndcg_at_1_std
value: -4.107225031227406
- type: nauc_ndcg_at_20_diff1
value: 42.493206675867114
- type: nauc_ndcg_at_20_max
value: 34.562441307459544
- type: nauc_ndcg_at_20_std
value: -3.4456116866749107
- type: nauc_ndcg_at_3_diff1
value: 42.24180336502808
- type: nauc_ndcg_at_3_max
value: 33.064267018100594
- type: nauc_ndcg_at_3_std
value: -7.786248093572142
- type: nauc_ndcg_at_5_diff1
value: 41.692714787779565
- type: nauc_ndcg_at_5_max
value: 34.20502498949156
- type: nauc_ndcg_at_5_std
value: -5.979557859282785
- type: nauc_precision_at_1000_diff1
value: -13.779832506640702
- type: nauc_precision_at_1000_max
value: 1.243001688631421
- type: nauc_precision_at_1000_std
value: 17.351623398622323
- type: nauc_precision_at_100_diff1
value: -11.310526816290297
- type: nauc_precision_at_100_max
value: 5.771669506192959
- type: nauc_precision_at_100_std
value: 19.917795079540113
- type: nauc_precision_at_10_diff1
value: 2.163699384635286
- type: nauc_precision_at_10_max
value: 19.66440698458386
- type: nauc_precision_at_10_std
value: 13.689876348315726
- type: nauc_precision_at_1_diff1
value: 46.354919460881206
- type: nauc_precision_at_1_max
value: 29.1760258591106
- type: nauc_precision_at_1_std
value: -4.107225031227406
- type: nauc_precision_at_20_diff1
value: -3.038735879584471
- type: nauc_precision_at_20_max
value: 14.132968299701695
- type: nauc_precision_at_20_std
value: 17.78069734664346
- type: nauc_precision_at_3_diff1
value: 21.783760758070095
- type: nauc_precision_at_3_max
value: 30.244127986404497
- type: nauc_precision_at_3_std
value: -0.12411163467738723
- type: nauc_precision_at_5_diff1
value: 10.980635723302418
- type: nauc_precision_at_5_max
value: 25.302293738975575
- type: nauc_precision_at_5_std
value: 6.4740817488722024
- type: nauc_recall_at_1000_diff1
value: 34.10343772356593
- type: nauc_recall_at_1000_max
value: 80.72497340357538
- type: nauc_recall_at_1000_std
value: 69.54564103264093
- type: nauc_recall_at_100_diff1
value: 33.427719956774126
- type: nauc_recall_at_100_max
value: 71.54086768335449
- type: nauc_recall_at_100_std
value: 49.66157377654885
- type: nauc_recall_at_10_diff1
value: 33.70139560054039
- type: nauc_recall_at_10_max
value: 45.47878072860151
- type: nauc_recall_at_10_std
value: 1.4188516615716378
- type: nauc_recall_at_1_diff1
value: 46.16376563218624
- type: nauc_recall_at_1_max
value: 26.342624776802232
- type: nauc_recall_at_1_std
value: -7.142171388751972
- type: nauc_recall_at_20_diff1
value: 35.805379874970086
- type: nauc_recall_at_20_max
value: 51.80479822253392
- type: nauc_recall_at_20_std
value: 13.531467576460143
- type: nauc_recall_at_3_diff1
value: 37.288500141631616
- type: nauc_recall_at_3_max
value: 35.07078243516728
- type: nauc_recall_at_3_std
value: -10.452926441410405
- type: nauc_recall_at_5_diff1
value: 34.83186104526897
- type: nauc_recall_at_5_max
value: 39.58488976496973
- type: nauc_recall_at_5_std
value: -6.3049292065708835
- type: ndcg_at_1
value: 50.839999999999996
- type: ndcg_at_10
value: 69.072
- type: ndcg_at_100
value: 71.538
- type: ndcg_at_1000
value: 71.77799999999999
- type: ndcg_at_20
value: 70.41
- type: ndcg_at_3
value: 62.544999999999995
- type: ndcg_at_5
value: 66.33099999999999
- type: precision_at_1
value: 50.839999999999996
- type: precision_at_10
value: 10.495000000000001
- type: precision_at_100
value: 1.1900000000000002
- type: precision_at_1000
value: 0.121
- type: precision_at_20
value: 5.5809999999999995
- type: precision_at_3
value: 27.636
- type: precision_at_5
value: 18.864
- type: recall_at_1
value: 45.483000000000004
- type: recall_at_10
value: 87.483
- type: recall_at_100
value: 97.844
- type: recall_at_1000
value: 99.66199999999999
- type: recall_at_20
value: 92.294
- type: recall_at_3
value: 71.2
- type: recall_at_5
value: 79.753
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 89.58
- type: map_at_1
value: 71.819
- type: map_at_10
value: 86.04899999999999
- type: map_at_100
value: 86.648
- type: map_at_1000
value: 86.66199999999999
- type: map_at_20
value: 86.441
- type: map_at_3
value: 83.114
- type: map_at_5
value: 84.981
- type: mrr_at_1
value: 82.62
- type: mrr_at_10
value: 88.62899999999979
- type: mrr_at_100
value: 88.70918591324215
- type: mrr_at_1000
value: 88.70973091492397
- type: mrr_at_20
value: 88.68914765317221
- type: mrr_at_3
value: 87.74999999999979
- type: mrr_at_5
value: 88.36799999999974
- type: nauc_map_at_1000_diff1
value: 77.89207709760448
- type: nauc_map_at_1000_max
value: 29.63371361495422
- type: nauc_map_at_1000_std
value: -48.628180385874344
- type: nauc_map_at_100_diff1
value: 77.89592179104915
- type: nauc_map_at_100_max
value: 29.617171506130756
- type: nauc_map_at_100_std
value: -48.66057170774648
- type: nauc_map_at_10_diff1
value: 78.0618161228185
- type: nauc_map_at_10_max
value: 29.178490609366737
- type: nauc_map_at_10_std
value: -50.74755004592002
- type: nauc_map_at_1_diff1
value: 81.64335579973574
- type: nauc_map_at_1_max
value: 21.813832226652174
- type: nauc_map_at_1_std
value: -42.57570978190876
- type: nauc_map_at_20_diff1
value: 77.9299081005938
- type: nauc_map_at_20_max
value: 29.458718470003888
- type: nauc_map_at_20_std
value: -49.63337236763102
- type: nauc_map_at_3_diff1
value: 78.72941448509229
- type: nauc_map_at_3_max
value: 26.600997896960056
- type: nauc_map_at_3_std
value: -51.889002227479885
- type: nauc_map_at_5_diff1
value: 78.31466610917171
- type: nauc_map_at_5_max
value: 28.09863984582896
- type: nauc_map_at_5_std
value: -52.14058096096497
- type: nauc_mrr_at_1000_diff1
value: 78.42667263739992
- type: nauc_mrr_at_1000_max
value: 31.98996235127974
- type: nauc_mrr_at_1000_std
value: -44.380439148429296
- type: nauc_mrr_at_100_diff1
value: 78.42661032698115
- type: nauc_mrr_at_100_max
value: 31.991652631740102
- type: nauc_mrr_at_100_std
value: -44.37854108460535
- type: nauc_mrr_at_10_diff1
value: 78.39126022544136
- type: nauc_mrr_at_10_max
value: 32.02023484451197
- type: nauc_mrr_at_10_std
value: -44.561252349176954
- type: nauc_mrr_at_1_diff1
value: 79.21630894647448
- type: nauc_mrr_at_1_max
value: 31.526303156060177
- type: nauc_mrr_at_1_std
value: -41.887504422443136
- type: nauc_mrr_at_20_diff1
value: 78.42548039170424
- type: nauc_mrr_at_20_max
value: 31.99588275070137
- type: nauc_mrr_at_20_std
value: -44.44957722627042
- type: nauc_mrr_at_3_diff1
value: 78.26165151833735
- type: nauc_mrr_at_3_max
value: 32.18028826126801
- type: nauc_mrr_at_3_std
value: -44.6998237213182
- type: nauc_mrr_at_5_diff1
value: 78.34786430903962
- type: nauc_mrr_at_5_max
value: 32.168476272879566
- type: nauc_mrr_at_5_std
value: -44.7915919956712
- type: nauc_ndcg_at_1000_diff1
value: 77.79198355957816
- type: nauc_ndcg_at_1000_max
value: 31.14363511518406
- type: nauc_ndcg_at_1000_std
value: -46.69335151274275
- type: nauc_ndcg_at_100_diff1
value: 77.79898090286419
- type: nauc_ndcg_at_100_max
value: 31.115103811629215
- type: nauc_ndcg_at_100_std
value: -46.73078913421965
- type: nauc_ndcg_at_10_diff1
value: 77.74856635461343
- type: nauc_ndcg_at_10_max
value: 30.279584686212747
- type: nauc_ndcg_at_10_std
value: -50.23514662356807
- type: nauc_ndcg_at_1_diff1
value: 79.17833000040999
- type: nauc_ndcg_at_1_max
value: 31.703788144510746
- type: nauc_ndcg_at_1_std
value: -41.854817402870715
- type: nauc_ndcg_at_20_diff1
value: 77.7380353804671
- type: nauc_ndcg_at_20_max
value: 30.622294129001553
- type: nauc_ndcg_at_20_std
value: -49.035794761065254
- type: nauc_ndcg_at_3_diff1
value: 77.41476880573593
- type: nauc_ndcg_at_3_max
value: 29.015949978243032
- type: nauc_ndcg_at_3_std
value: -49.78627087622648
- type: nauc_ndcg_at_5_diff1
value: 77.64439137502896
- type: nauc_ndcg_at_5_max
value: 29.444684897492206
- type: nauc_ndcg_at_5_std
value: -51.21908400252501
- type: nauc_precision_at_1000_diff1
value: -44.92396459446822
- type: nauc_precision_at_1000_max
value: -3.674153720989045
- type: nauc_precision_at_1000_std
value: 39.56552468277785
- type: nauc_precision_at_100_diff1
value: -44.75143023259094
- type: nauc_precision_at_100_max
value: -3.705280025140011
- type: nauc_precision_at_100_std
value: 39.433619999113326
- type: nauc_precision_at_10_diff1
value: -41.0651074726579
- type: nauc_precision_at_10_max
value: -0.21097985601783667
- type: nauc_precision_at_10_std
value: 26.24652824589493
- type: nauc_precision_at_1_diff1
value: 79.17833000040999
- type: nauc_precision_at_1_max
value: 31.703788144510746
- type: nauc_precision_at_1_std
value: -41.854817402870715
- type: nauc_precision_at_20_diff1
value: -43.368001340920294
- type: nauc_precision_at_20_max
value: -2.036990010399129
- type: nauc_precision_at_20_std
value: 32.37747041406297
- type: nauc_precision_at_3_diff1
value: -22.089307548346877
- type: nauc_precision_at_3_max
value: 6.2280973175296
- type: nauc_precision_at_3_std
value: 5.323992514036145
- type: nauc_precision_at_5_diff1
value: -34.07115055244003
- type: nauc_precision_at_5_max
value: 2.5955315789198834
- type: nauc_precision_at_5_std
value: 16.26096689407332
- type: nauc_recall_at_1000_diff1
value: 58.27703860947467
- type: nauc_recall_at_1000_max
value: 68.59835835315768
- type: nauc_recall_at_1000_std
value: 77.96687006056064
- type: nauc_recall_at_100_diff1
value: 73.24371223081737
- type: nauc_recall_at_100_max
value: 39.55925344664591
- type: nauc_recall_at_100_std
value: -32.25605030215798
- type: nauc_recall_at_10_diff1
value: 73.41261201339202
- type: nauc_recall_at_10_max
value: 26.822979434062926
- type: nauc_recall_at_10_std
value: -74.2909332592806
- type: nauc_recall_at_1_diff1
value: 81.64335579973574
- type: nauc_recall_at_1_max
value: 21.813832226652174
- type: nauc_recall_at_1_std
value: -42.57570978190876
- type: nauc_recall_at_20_diff1
value: 72.7621297920656
- type: nauc_recall_at_20_max
value: 26.02492304096079
- type: nauc_recall_at_20_std
value: -77.8724532438279
- type: nauc_recall_at_3_diff1
value: 75.25149312810714
- type: nauc_recall_at_3_max
value: 23.20545662481487
- type: nauc_recall_at_3_std
value: -59.69689982140521
- type: nauc_recall_at_5_diff1
value: 73.69807273001406
- type: nauc_recall_at_5_max
value: 24.073666798066057
- type: nauc_recall_at_5_std
value: -67.91121268130719
- type: ndcg_at_1
value: 82.64
- type: ndcg_at_10
value: 89.58
- type: ndcg_at_100
value: 90.606
- type: ndcg_at_1000
value: 90.676
- type: ndcg_at_20
value: 90.132
- type: ndcg_at_3
value: 86.88
- type: ndcg_at_5
value: 88.40299999999999
- type: precision_at_1
value: 82.64
- type: precision_at_10
value: 13.604
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.188
- type: precision_at_3
value: 38.083
- type: precision_at_5
value: 25.018
- type: recall_at_1
value: 71.819
- type: recall_at_10
value: 96.34700000000001
- type: recall_at_100
value: 99.715
- type: recall_at_1000
value: 99.995
- type: recall_at_20
value: 98.073
- type: recall_at_3
value: 88.57300000000001
- type: recall_at_5
value: 92.908
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 71.18966762070158
- type: v_measure
value: 71.18966762070158
- type: v_measure_std
value: 2.7498969054457048
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 74.42014716862516
- type: v_measure
value: 74.42014716862516
- type: v_measure_std
value: 9.909739891410648
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 25.041999999999998
- type: map_at_1
value: 5.893000000000001
- type: map_at_10
value: 15.260000000000002
- type: map_at_100
value: 18.084
- type: map_at_1000
value: 18.467
- type: map_at_20
value: 16.675
- type: map_at_3
value: 10.526
- type: map_at_5
value: 12.775
- type: mrr_at_1
value: 28.999999999999996
- type: mrr_at_10
value: 41.03575396825395
- type: mrr_at_100
value: 42.136771862785835
- type: mrr_at_1000
value: 42.16698555415099
- type: mrr_at_20
value: 41.707493696104315
- type: mrr_at_3
value: 37.34999999999998
- type: mrr_at_5
value: 39.59999999999995
- type: nauc_map_at_1000_diff1
value: 12.080002654911883
- type: nauc_map_at_1000_max
value: 29.813563682286276
- type: nauc_map_at_1000_std
value: 20.36659817908673
- type: nauc_map_at_100_diff1
value: 12.108735517749706
- type: nauc_map_at_100_max
value: 29.76830671710955
- type: nauc_map_at_100_std
value: 20.3433621032846
- type: nauc_map_at_10_diff1
value: 12.91575031185637
- type: nauc_map_at_10_max
value: 29.427600958386318
- type: nauc_map_at_10_std
value: 16.89867275177153
- type: nauc_map_at_1_diff1
value: 19.353069488987916
- type: nauc_map_at_1_max
value: 17.093914951159693
- type: nauc_map_at_1_std
value: 8.19886078055046
- type: nauc_map_at_20_diff1
value: 11.977233457943113
- type: nauc_map_at_20_max
value: 29.171812822948805
- type: nauc_map_at_20_std
value: 18.780517506173965
- type: nauc_map_at_3_diff1
value: 14.453129464176092
- type: nauc_map_at_3_max
value: 25.801958649112077
- type: nauc_map_at_3_std
value: 11.572823684429643
- type: nauc_map_at_5_diff1
value: 13.167155808104997
- type: nauc_map_at_5_max
value: 27.355626948365792
- type: nauc_map_at_5_std
value: 14.414151839192183
- type: nauc_mrr_at_1000_diff1
value: 17.262104643988636
- type: nauc_mrr_at_1000_max
value: 23.991373837217058
- type: nauc_mrr_at_1000_std
value: 12.44755488671623
- type: nauc_mrr_at_100_diff1
value: 17.267280132318703
- type: nauc_mrr_at_100_max
value: 24.022189287889294
- type: nauc_mrr_at_100_std
value: 12.480695500214788
- type: nauc_mrr_at_10_diff1
value: 17.012383998246268
- type: nauc_mrr_at_10_max
value: 24.192637911171722
- type: nauc_mrr_at_10_std
value: 12.524608847408917
- type: nauc_mrr_at_1_diff1
value: 19.43518811038007
- type: nauc_mrr_at_1_max
value: 17.747482933395602
- type: nauc_mrr_at_1_std
value: 8.410779775558684
- type: nauc_mrr_at_20_diff1
value: 17.202663281407446
- type: nauc_mrr_at_20_max
value: 24.091991130543118
- type: nauc_mrr_at_20_std
value: 12.503814263019908
- type: nauc_mrr_at_3_diff1
value: 17.52733013432995
- type: nauc_mrr_at_3_max
value: 23.569459518780214
- type: nauc_mrr_at_3_std
value: 11.770846827520726
- type: nauc_mrr_at_5_diff1
value: 17.10817561975543
- type: nauc_mrr_at_5_max
value: 23.945141435234678
- type: nauc_mrr_at_5_std
value: 12.034468615317719
- type: nauc_ndcg_at_1000_diff1
value: 12.317811393346936
- type: nauc_ndcg_at_1000_max
value: 30.809991350156103
- type: nauc_ndcg_at_1000_std
value: 24.517501065205067
- type: nauc_ndcg_at_100_diff1
value: 12.824804203182936
- type: nauc_ndcg_at_100_max
value: 30.895499817010748
- type: nauc_ndcg_at_100_std
value: 25.424376279745402
- type: nauc_ndcg_at_10_diff1
value: 13.32724552457439
- type: nauc_ndcg_at_10_max
value: 30.409088666807456
- type: nauc_ndcg_at_10_std
value: 18.216330475714113
- type: nauc_ndcg_at_1_diff1
value: 19.43518811038007
- type: nauc_ndcg_at_1_max
value: 17.747482933395602
- type: nauc_ndcg_at_1_std
value: 8.410779775558684
- type: nauc_ndcg_at_20_diff1
value: 12.224399111852902
- type: nauc_ndcg_at_20_max
value: 29.86352330445272
- type: nauc_ndcg_at_20_std
value: 21.196937851331807
- type: nauc_ndcg_at_3_diff1
value: 15.367489533734027
- type: nauc_ndcg_at_3_max
value: 26.76486390741532
- type: nauc_ndcg_at_3_std
value: 12.606077508789923
- type: nauc_ndcg_at_5_diff1
value: 13.831157482390935
- type: nauc_ndcg_at_5_max
value: 28.070226983968904
- type: nauc_ndcg_at_5_std
value: 15.236787943125435
- type: nauc_precision_at_1000_diff1
value: 0.016122957101357048
- type: nauc_precision_at_1000_max
value: 24.380929903557334
- type: nauc_precision_at_1000_std
value: 34.54045112720052
- type: nauc_precision_at_100_diff1
value: 7.255224788507301
- type: nauc_precision_at_100_max
value: 27.98453788447542
- type: nauc_precision_at_100_std
value: 35.38999555441665
- type: nauc_precision_at_10_diff1
value: 9.69185099834181
- type: nauc_precision_at_10_max
value: 32.532315522580454
- type: nauc_precision_at_10_std
value: 21.48948348473612
- type: nauc_precision_at_1_diff1
value: 19.43518811038007
- type: nauc_precision_at_1_max
value: 17.747482933395602
- type: nauc_precision_at_1_std
value: 8.410779775558684
- type: nauc_precision_at_20_diff1
value: 6.964076536695672
- type: nauc_precision_at_20_max
value: 29.30087236410044
- type: nauc_precision_at_20_std
value: 26.413625895571986
- type: nauc_precision_at_3_diff1
value: 14.145134359925155
- type: nauc_precision_at_3_max
value: 29.915650960808303
- type: nauc_precision_at_3_std
value: 14.095370019867797
- type: nauc_precision_at_5_diff1
value: 11.043933558522692
- type: nauc_precision_at_5_max
value: 30.93016505807111
- type: nauc_precision_at_5_std
value: 17.749256196062603
- type: nauc_recall_at_1000_diff1
value: -0.7776817772090345
- type: nauc_recall_at_1000_max
value: 23.094717340324518
- type: nauc_recall_at_1000_std
value: 37.189908681396425
- type: nauc_recall_at_100_diff1
value: 6.887748742013364
- type: nauc_recall_at_100_max
value: 27.00798435230277
- type: nauc_recall_at_100_std
value: 35.908147807345344
- type: nauc_recall_at_10_diff1
value: 9.605632017480751
- type: nauc_recall_at_10_max
value: 31.845202901168655
- type: nauc_recall_at_10_std
value: 21.497414586634683
- type: nauc_recall_at_1_diff1
value: 19.353069488987916
- type: nauc_recall_at_1_max
value: 17.093914951159693
- type: nauc_recall_at_1_std
value: 8.19886078055046
- type: nauc_recall_at_20_diff1
value: 6.927503731844782
- type: nauc_recall_at_20_max
value: 28.611698183338202
- type: nauc_recall_at_20_std
value: 26.69018660149911
- type: nauc_recall_at_3_diff1
value: 14.043724087062268
- type: nauc_recall_at_3_max
value: 29.269835821380465
- type: nauc_recall_at_3_std
value: 14.104419605998094
- type: nauc_recall_at_5_diff1
value: 11.017319452873336
- type: nauc_recall_at_5_max
value: 30.295720628306228
- type: nauc_recall_at_5_std
value: 17.758048545573825
- type: ndcg_at_1
value: 28.999999999999996
- type: ndcg_at_10
value: 25.041999999999998
- type: ndcg_at_100
value: 35.045
- type: ndcg_at_1000
value: 40.803
- type: ndcg_at_20
value: 28.584
- type: ndcg_at_3
value: 23.249
- type: ndcg_at_5
value: 20.533
- type: precision_at_1
value: 28.999999999999996
- type: precision_at_10
value: 13.120000000000001
- type: precision_at_100
value: 2.7470000000000003
- type: precision_at_1000
value: 0.41200000000000003
- type: precision_at_20
value: 8.584999999999999
- type: precision_at_3
value: 21.633
- type: precision_at_5
value: 18.099999999999998
- type: recall_at_1
value: 5.893000000000001
- type: recall_at_10
value: 26.567
- type: recall_at_100
value: 55.800000000000004
- type: recall_at_1000
value: 83.608
- type: recall_at_20
value: 34.86
- type: recall_at_3
value: 13.153
- type: recall_at_5
value: 18.323
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 86.57284584320382
- type: cosine_spearman
value: 82.20531642680812
- type: euclidean_pearson
value: 83.94261758556554
- type: euclidean_spearman
value: 82.20721497738559
- type: main_score
value: 82.20531642680812
- type: manhattan_pearson
value: 84.15902154703083
- type: manhattan_spearman
value: 82.19506027155957
- type: pearson
value: 86.57284584320382
- type: spearman
value: 82.20531642680812
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 86.28047602146931
- type: cosine_spearman
value: 79.51504881448884
- type: euclidean_pearson
value: 83.10545189967856
- type: euclidean_spearman
value: 79.50586960492797
- type: main_score
value: 79.51504881448884
- type: manhattan_pearson
value: 83.44244457500889
- type: manhattan_spearman
value: 79.730303339846
- type: pearson
value: 86.28047602146931
- type: spearman
value: 79.51504881448884
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 88.74723553048702
- type: cosine_spearman
value: 89.18936052329725
- type: euclidean_pearson
value: 88.90400878928668
- type: euclidean_spearman
value: 89.19174821431281
- type: main_score
value: 89.18936052329725
- type: manhattan_pearson
value: 88.81504628424054
- type: manhattan_spearman
value: 89.18063294142597
- type: pearson
value: 88.74723553048702
- type: spearman
value: 89.18936052329725
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 86.45403437836023
- type: cosine_spearman
value: 85.14654611519086
- type: euclidean_pearson
value: 85.87509624462743
- type: euclidean_spearman
value: 85.1391108856681
- type: main_score
value: 85.14654611519086
- type: manhattan_pearson
value: 85.96635794953866
- type: manhattan_spearman
value: 85.3271371527667
- type: pearson
value: 86.45403437836023
- type: spearman
value: 85.14654611519086
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 87.84742260009705
- type: cosine_spearman
value: 89.10215217191254
- type: euclidean_pearson
value: 88.97393286325477
- type: euclidean_spearman
value: 89.1014105509662
- type: main_score
value: 89.10215217191254
- type: manhattan_pearson
value: 89.31698781090151
- type: manhattan_spearman
value: 89.53000001764433
- type: pearson
value: 87.84742260009705
- type: spearman
value: 89.10215217191254
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 85.22397535461835
- type: cosine_spearman
value: 87.14066355879785
- type: euclidean_pearson
value: 86.31393364087295
- type: euclidean_spearman
value: 87.14018892702765
- type: main_score
value: 87.14066355879785
- type: manhattan_pearson
value: 86.36366855248434
- type: manhattan_spearman
value: 87.20858630423012
- type: pearson
value: 85.22397535461835
- type: spearman
value: 87.14066355879785
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 90.66131612061355
- type: cosine_spearman
value: 90.97082650129164
- type: euclidean_pearson
value: 90.98181906744969
- type: euclidean_spearman
value: 90.99008476850047
- type: main_score
value: 90.97082650129164
- type: manhattan_pearson
value: 90.75245040709021
- type: manhattan_spearman
value: 90.6199877691265
- type: pearson
value: 90.66131612061355
- type: spearman
value: 90.97082650129164
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 67.270656447085
- type: cosine_spearman
value: 67.82870469746828
- type: euclidean_pearson
value: 69.03857775285664
- type: euclidean_spearman
value: 67.74455108773341
- type: main_score
value: 67.82870469746828
- type: manhattan_pearson
value: 69.25304172245812
- type: manhattan_spearman
value: 68.00987097916055
- type: pearson
value: 67.270656447085
- type: spearman
value: 67.82870469746828
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 87.17245205384889
- type: cosine_spearman
value: 87.7360146030987
- type: euclidean_pearson
value: 87.48919412794656
- type: euclidean_spearman
value: 87.7312047878383
- type: main_score
value: 87.7360146030987
- type: manhattan_pearson
value: 87.61476224354806
- type: manhattan_spearman
value: 87.95220889254693
- type: pearson
value: 87.17245205384889
- type: spearman
value: 87.7360146030987
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 88.43547871921146
- type: map
value: 88.43547871921146
- type: mrr
value: 96.5564473652709
- type: nAUC_map_diff1
value: -13.66029392579231
- type: nAUC_map_max
value: 50.325613574053506
- type: nAUC_map_std
value: 60.02986231275796
- type: nAUC_mrr_diff1
value: 23.83821476411125
- type: nAUC_mrr_max
value: 86.72643311769906
- type: nAUC_mrr_std
value: 72.12741063469213
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 78.233
- type: map_at_1
value: 61.49400000000001
- type: map_at_10
value: 73.30600000000001
- type: map_at_100
value: 73.719
- type: map_at_1000
value: 73.724
- type: map_at_20
value: 73.611
- type: map_at_3
value: 70.626
- type: map_at_5
value: 72.417
- type: mrr_at_1
value: 64.66666666666666
- type: mrr_at_10
value: 74.30357142857143
- type: mrr_at_100
value: 74.56950898079988
- type: mrr_at_1000
value: 74.57295833098681
- type: mrr_at_20
value: 74.46165223665226
- type: mrr_at_3
value: 72.3888888888889
- type: mrr_at_5
value: 73.60555555555557
- type: nauc_map_at_1000_diff1
value: 76.51524604780636
- type: nauc_map_at_1000_max
value: 53.48521938401881
- type: nauc_map_at_1000_std
value: -7.347799382158861
- type: nauc_map_at_100_diff1
value: 76.5122888096236
- type: nauc_map_at_100_max
value: 53.49221847471618
- type: nauc_map_at_100_std
value: -7.329683735681086
- type: nauc_map_at_10_diff1
value: 76.30928630674504
- type: nauc_map_at_10_max
value: 53.00102977185941
- type: nauc_map_at_10_std
value: -7.7467740085108705
- type: nauc_map_at_1_diff1
value: 79.54189281784247
- type: nauc_map_at_1_max
value: 46.630071622109526
- type: nauc_map_at_1_std
value: -14.395943134644112
- type: nauc_map_at_20_diff1
value: 76.41604361947962
- type: nauc_map_at_20_max
value: 53.578883876146875
- type: nauc_map_at_20_std
value: -7.403103451288041
- type: nauc_map_at_3_diff1
value: 76.25911617571941
- type: nauc_map_at_3_max
value: 49.140287380513605
- type: nauc_map_at_3_std
value: -11.35992449218983
- type: nauc_map_at_5_diff1
value: 76.35122077770336
- type: nauc_map_at_5_max
value: 52.1744367901208
- type: nauc_map_at_5_std
value: -7.85753955055384
- type: nauc_mrr_at_1000_diff1
value: 76.97223309515867
- type: nauc_mrr_at_1000_max
value: 57.263787498613326
- type: nauc_mrr_at_1000_std
value: -4.884090708840035
- type: nauc_mrr_at_100_diff1
value: 76.97312970894603
- type: nauc_mrr_at_100_max
value: 57.26850730446478
- type: nauc_mrr_at_100_std
value: -4.875200894216617
- type: nauc_mrr_at_10_diff1
value: 76.65927674223613
- type: nauc_mrr_at_10_max
value: 57.30979763941454
- type: nauc_mrr_at_10_std
value: -4.863331094022142
- type: nauc_mrr_at_1_diff1
value: 80.0454932568644
- type: nauc_mrr_at_1_max
value: 56.76038421319305
- type: nauc_mrr_at_1_std
value: -4.101939392632653
- type: nauc_mrr_at_20_diff1
value: 76.87237970440503
- type: nauc_mrr_at_20_max
value: 57.33843605225869
- type: nauc_mrr_at_20_std
value: -4.96248984417978
- type: nauc_mrr_at_3_diff1
value: 76.74130186666727
- type: nauc_mrr_at_3_max
value: 56.19313244846155
- type: nauc_mrr_at_3_std
value: -5.684365934009136
- type: nauc_mrr_at_5_diff1
value: 76.66406918799962
- type: nauc_mrr_at_5_max
value: 57.56110093228628
- type: nauc_mrr_at_5_std
value: -3.7464413085588073
- type: nauc_ndcg_at_1000_diff1
value: 76.19194173971773
- type: nauc_ndcg_at_1000_max
value: 55.57464600170693
- type: nauc_ndcg_at_1000_std
value: -6.0761689532372625
- type: nauc_ndcg_at_100_diff1
value: 76.14631273843654
- type: nauc_ndcg_at_100_max
value: 55.72246565373382
- type: nauc_ndcg_at_100_std
value: -5.595160698860595
- type: nauc_ndcg_at_10_diff1
value: 75.0108223611192
- type: nauc_ndcg_at_10_max
value: 55.27894212877493
- type: nauc_ndcg_at_10_std
value: -6.968331740214591
- type: nauc_ndcg_at_1_diff1
value: 80.0454932568644
- type: nauc_ndcg_at_1_max
value: 56.76038421319305
- type: nauc_ndcg_at_1_std
value: -4.101939392632653
- type: nauc_ndcg_at_20_diff1
value: 75.54887755702472
- type: nauc_ndcg_at_20_max
value: 56.406879417251496
- type: nauc_ndcg_at_20_std
value: -6.495231061329629
- type: nauc_ndcg_at_3_diff1
value: 75.03620356688509
- type: nauc_ndcg_at_3_max
value: 52.147381077773424
- type: nauc_ndcg_at_3_std
value: -8.448005688956199
- type: nauc_ndcg_at_5_diff1
value: 75.1195898074229
- type: nauc_ndcg_at_5_max
value: 54.2321033861173
- type: nauc_ndcg_at_5_std
value: -5.882690780895338
- type: nauc_precision_at_1000_diff1
value: -28.081979732100532
- type: nauc_precision_at_1000_max
value: 35.055348014832916
- type: nauc_precision_at_1000_std
value: 59.61280468927384
- type: nauc_precision_at_100_diff1
value: -25.112740730587458
- type: nauc_precision_at_100_max
value: 38.26331300116496
- type: nauc_precision_at_100_std
value: 62.46316222328831
- type: nauc_precision_at_10_diff1
value: -2.6766206473658833
- type: nauc_precision_at_10_max
value: 45.95321867204845
- type: nauc_precision_at_10_std
value: 45.07212468670564
- type: nauc_precision_at_1_diff1
value: 80.0454932568644
- type: nauc_precision_at_1_max
value: 56.76038421319305
- type: nauc_precision_at_1_std
value: -4.101939392632653
- type: nauc_precision_at_20_diff1
value: -10.698911116738385
- type: nauc_precision_at_20_max
value: 43.467275950182994
- type: nauc_precision_at_20_std
value: 48.00467321991766
- type: nauc_precision_at_3_diff1
value: 33.6344708541193
- type: nauc_precision_at_3_max
value: 49.309242331670504
- type: nauc_precision_at_3_std
value: 21.02940391379915
- type: nauc_precision_at_5_diff1
value: 13.560415600596318
- type: nauc_precision_at_5_max
value: 48.918726500100085
- type: nauc_precision_at_5_std
value: 39.940930429172184
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 70.82166199813196
- type: nauc_recall_at_100_max
value: 76.6106442577042
- type: nauc_recall_at_100_std
value: 66.47992530345513
- type: nauc_recall_at_10_diff1
value: 62.68908885556092
- type: nauc_recall_at_10_max
value: 58.14262437741839
- type: nauc_recall_at_10_std
value: -12.946717875063369
- type: nauc_recall_at_1_diff1
value: 79.54189281784247
- type: nauc_recall_at_1_max
value: 46.630071622109526
- type: nauc_recall_at_1_std
value: -14.395943134644112
- type: nauc_recall_at_20_diff1
value: 65.79470497876567
- type: nauc_recall_at_20_max
value: 71.68308183488456
- type: nauc_recall_at_20_std
value: -12.556850697268453
- type: nauc_recall_at_3_diff1
value: 68.3240211318129
- type: nauc_recall_at_3_max
value: 45.05998217275036
- type: nauc_recall_at_3_std
value: -14.23179772593869
- type: nauc_recall_at_5_diff1
value: 67.53366869904056
- type: nauc_recall_at_5_max
value: 53.57935627081027
- type: nauc_recall_at_5_std
value: -3.3271112904853393
- type: ndcg_at_1
value: 64.667
- type: ndcg_at_10
value: 78.233
- type: ndcg_at_100
value: 79.806
- type: ndcg_at_1000
value: 79.92099999999999
- type: ndcg_at_20
value: 79.006
- type: ndcg_at_3
value: 74.018
- type: ndcg_at_5
value: 76.334
- type: precision_at_1
value: 64.667
- type: precision_at_10
value: 10.4
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.383
- type: precision_at_3
value: 29.444
- type: precision_at_5
value: 19.467000000000002
- type: recall_at_1
value: 61.49400000000001
- type: recall_at_10
value: 92.156
- type: recall_at_100
value: 99.167
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 94.833
- type: recall_at_3
value: 80.833
- type: recall_at_5
value: 86.6
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.8039603960396
- type: cosine_accuracy_threshold
value: 84.54211950302124
- type: cosine_ap
value: 95.59056372734358
- type: cosine_f1
value: 90.1394422310757
- type: cosine_f1_threshold
value: 84.54211950302124
- type: cosine_precision
value: 89.78174603174604
- type: cosine_recall
value: 90.5
- type: dot_accuracy
value: 99.80594059405941
- type: dot_accuracy_threshold
value: 85.57180166244507
- type: dot_ap
value: 95.53453431914399
- type: dot_f1
value: 90.10442565887618
- type: dot_f1_threshold
value: 84.59715843200684
- type: dot_precision
value: 89.61424332344214
- type: dot_recall
value: 90.60000000000001
- type: euclidean_accuracy
value: 99.8039603960396
- type: euclidean_accuracy_threshold
value: 53.253382444381714
- type: euclidean_ap
value: 95.5850992402159
- type: euclidean_f1
value: 90.09457441513192
- type: euclidean_f1_threshold
value: 55.725520849227905
- type: euclidean_precision
value: 89.69276511397423
- type: euclidean_recall
value: 90.5
- type: main_score
value: 95.7485189884476
- type: manhattan_accuracy
value: 99.81485148514851
- type: manhattan_accuracy_threshold
value: 3491.29638671875
- type: manhattan_ap
value: 95.7485189884476
- type: manhattan_f1
value: 90.464048954615
- type: manhattan_f1_threshold
value: 3491.29638671875
- type: manhattan_precision
value: 92.2996878251821
- type: manhattan_recall
value: 88.7
- type: max_ap
value: 95.7485189884476
- type: max_f1
value: 90.464048954615
- type: max_precision
value: 92.2996878251821
- type: max_recall
value: 90.60000000000001
- type: similarity_accuracy
value: 99.8039603960396
- type: similarity_accuracy_threshold
value: 84.54211950302124
- type: similarity_ap
value: 95.59056372734358
- type: similarity_f1
value: 90.1394422310757
- type: similarity_f1_threshold
value: 84.54211950302124
- type: similarity_precision
value: 89.78174603174604
- type: similarity_recall
value: 90.5
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 78.49205191950675
- type: v_measure
value: 78.49205191950675
- type: v_measure_std
value: 2.84869550699959
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 48.90421736513028
- type: v_measure
value: 48.90421736513028
- type: v_measure_std
value: 1.6875865714471023
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 52.9874730481696
- type: map
value: 52.9874730481696
- type: mrr
value: 53.85867604617604
- type: nAUC_map_diff1
value: 39.633429293407616
- type: nAUC_map_max
value: 10.236807988858546
- type: nAUC_map_std
value: 10.276522217929674
- type: nAUC_mrr_diff1
value: 40.0543079218377
- type: nAUC_mrr_max
value: 10.96209807382042
- type: nAUC_mrr_std
value: 10.524400196109918
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.727801109114232
- type: cosine_spearman
value: 31.66058223980157
- type: dot_pearson
value: 30.78818248622866
- type: dot_spearman
value: 31.525158776890265
- type: main_score
value: 31.66058223980157
- type: pearson
value: 30.727801109114232
- type: spearman
value: 31.66058223980157
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 85.206
- type: map_at_1
value: 0.246
- type: map_at_10
value: 2.1950000000000003
- type: map_at_100
value: 14.179
- type: map_at_1000
value: 35.037
- type: map_at_20
value: 4.143
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.135
- type: mrr_at_1
value: 94.0
- type: mrr_at_10
value: 96.66666666666666
- type: mrr_at_100
value: 96.66666666666666
- type: mrr_at_1000
value: 96.66666666666666
- type: mrr_at_20
value: 96.66666666666666
- type: mrr_at_3
value: 96.66666666666666
- type: mrr_at_5
value: 96.66666666666666
- type: nauc_map_at_1000_diff1
value: -4.6264497624527525
- type: nauc_map_at_1000_max
value: 44.594457564749355
- type: nauc_map_at_1000_std
value: 73.17642341400133
- type: nauc_map_at_100_diff1
value: 23.451335157405726
- type: nauc_map_at_100_max
value: 25.426398857299525
- type: nauc_map_at_100_std
value: 64.07416694472633
- type: nauc_map_at_10_diff1
value: 46.57568738568346
- type: nauc_map_at_10_max
value: 9.693233249079238
- type: nauc_map_at_10_std
value: 28.549530265164357
- type: nauc_map_at_1_diff1
value: 53.48238396620123
- type: nauc_map_at_1_max
value: 0.33476619393733076
- type: nauc_map_at_1_std
value: 8.906362219128463
- type: nauc_map_at_20_diff1
value: 39.40719602207749
- type: nauc_map_at_20_max
value: 9.635915072074045
- type: nauc_map_at_20_std
value: 35.15634791346394
- type: nauc_map_at_3_diff1
value: 53.11784737840137
- type: nauc_map_at_3_max
value: 3.059682761072153
- type: nauc_map_at_3_std
value: 21.310633086556617
- type: nauc_map_at_5_diff1
value: 49.91570701185436
- type: nauc_map_at_5_max
value: 8.045082896244576
- type: nauc_map_at_5_std
value: 20.597686235051647
- type: nauc_mrr_at_1000_diff1
value: 41.98412698412726
- type: nauc_mrr_at_1000_max
value: 78.24463118580779
- type: nauc_mrr_at_1000_std
value: 0.30812324930028195
- type: nauc_mrr_at_100_diff1
value: 41.98412698412726
- type: nauc_mrr_at_100_max
value: 78.24463118580779
- type: nauc_mrr_at_100_std
value: 0.30812324930028195
- type: nauc_mrr_at_10_diff1
value: 41.98412698412726
- type: nauc_mrr_at_10_max
value: 78.24463118580779
- type: nauc_mrr_at_10_std
value: 0.30812324930028195
- type: nauc_mrr_at_1_diff1
value: 38.62433862433873
- type: nauc_mrr_at_1_max
value: 80.78120136943666
- type: nauc_mrr_at_1_std
value: -10.768751945222197
- type: nauc_mrr_at_20_diff1
value: 41.98412698412726
- type: nauc_mrr_at_20_max
value: 78.24463118580779
- type: nauc_mrr_at_20_std
value: 0.30812324930028195
- type: nauc_mrr_at_3_diff1
value: 41.98412698412726
- type: nauc_mrr_at_3_max
value: 78.24463118580779
- type: nauc_mrr_at_3_std
value: 0.30812324930028195
- type: nauc_mrr_at_5_diff1
value: 41.98412698412726
- type: nauc_mrr_at_5_max
value: 78.24463118580779
- type: nauc_mrr_at_5_std
value: 0.30812324930028195
- type: nauc_ndcg_at_1000_diff1
value: 0.5174948602880207
- type: nauc_ndcg_at_1000_max
value: 48.60686602077053
- type: nauc_ndcg_at_1000_std
value: 75.72456343175277
- type: nauc_ndcg_at_100_diff1
value: -20.747252137999254
- type: nauc_ndcg_at_100_max
value: 49.985132618254994
- type: nauc_ndcg_at_100_std
value: 61.096383293836574
- type: nauc_ndcg_at_10_diff1
value: 6.791377920463332
- type: nauc_ndcg_at_10_max
value: 57.50019332833286
- type: nauc_ndcg_at_10_std
value: 49.201028841219426
- type: nauc_ndcg_at_1_diff1
value: 54.92683440362145
- type: nauc_ndcg_at_1_max
value: 83.8667228129276
- type: nauc_ndcg_at_1_std
value: 1.6738604063586122
- type: nauc_ndcg_at_20_diff1
value: -5.1948699196314925
- type: nauc_ndcg_at_20_max
value: 54.483087684806556
- type: nauc_ndcg_at_20_std
value: 50.54823818118781
- type: nauc_ndcg_at_3_diff1
value: 26.267246500164372
- type: nauc_ndcg_at_3_max
value: 63.0173212926611
- type: nauc_ndcg_at_3_std
value: 41.025597406368256
- type: nauc_ndcg_at_5_diff1
value: 16.910185454343036
- type: nauc_ndcg_at_5_max
value: 60.9328683868778
- type: nauc_ndcg_at_5_std
value: 36.70169905857712
- type: nauc_precision_at_1000_diff1
value: -46.374447765983525
- type: nauc_precision_at_1000_max
value: 35.36052337813863
- type: nauc_precision_at_1000_std
value: 14.219220668161018
- type: nauc_precision_at_100_diff1
value: -29.7838083657744
- type: nauc_precision_at_100_max
value: 43.93589400385112
- type: nauc_precision_at_100_std
value: 55.425045718579945
- type: nauc_precision_at_10_diff1
value: -12.016613405227687
- type: nauc_precision_at_10_max
value: 57.79924427743131
- type: nauc_precision_at_10_std
value: 49.022036703550675
- type: nauc_precision_at_1_diff1
value: 38.62433862433873
- type: nauc_precision_at_1_max
value: 80.78120136943666
- type: nauc_precision_at_1_std
value: -10.768751945222197
- type: nauc_precision_at_20_diff1
value: -23.95633847880195
- type: nauc_precision_at_20_max
value: 48.34715917258276
- type: nauc_precision_at_20_std
value: 48.82198285255887
- type: nauc_precision_at_3_diff1
value: 6.871296905858807
- type: nauc_precision_at_3_max
value: 70.54805793285054
- type: nauc_precision_at_3_std
value: 44.65108624094803
- type: nauc_precision_at_5_diff1
value: -9.074932448759695
- type: nauc_precision_at_5_max
value: 67.41284242437573
- type: nauc_precision_at_5_std
value: 23.876891983919577
- type: nauc_recall_at_1000_diff1
value: 8.142288830293255
- type: nauc_recall_at_1000_max
value: 38.85182826835104
- type: nauc_recall_at_1000_std
value: 68.60783819217335
- type: nauc_recall_at_100_diff1
value: 34.262914076287466
- type: nauc_recall_at_100_max
value: 12.87009658528838
- type: nauc_recall_at_100_std
value: 56.21330603762995
- type: nauc_recall_at_10_diff1
value: 49.33830945338758
- type: nauc_recall_at_10_max
value: 0.3539875530671406
- type: nauc_recall_at_10_std
value: 26.85864465557644
- type: nauc_recall_at_1_diff1
value: 53.48238396620123
- type: nauc_recall_at_1_max
value: 0.33476619393733076
- type: nauc_recall_at_1_std
value: 8.906362219128463
- type: nauc_recall_at_20_diff1
value: 44.21928181266254
- type: nauc_recall_at_20_max
value: -0.9198356057088594
- type: nauc_recall_at_20_std
value: 31.484376992896784
- type: nauc_recall_at_3_diff1
value: 53.038093080990876
- type: nauc_recall_at_3_max
value: -1.4170895916973003
- type: nauc_recall_at_3_std
value: 21.890202855574497
- type: nauc_recall_at_5_diff1
value: 49.39742214825278
- type: nauc_recall_at_5_max
value: 2.8412267611894517
- type: nauc_recall_at_5_std
value: 18.01598921859512
- type: ndcg_at_1
value: 91.0
- type: ndcg_at_10
value: 85.206
- type: ndcg_at_100
value: 67.29
- type: ndcg_at_1000
value: 60.584
- type: ndcg_at_20
value: 82.321
- type: ndcg_at_3
value: 88.642
- type: ndcg_at_5
value: 87.063
- type: precision_at_1
value: 94.0
- type: precision_at_10
value: 89.8
- type: precision_at_100
value: 69.78
- type: precision_at_1000
value: 26.738
- type: precision_at_20
value: 87.2
- type: precision_at_3
value: 92.0
- type: precision_at_5
value: 90.8
- type: recall_at_1
value: 0.246
- type: recall_at_10
value: 2.344
- type: recall_at_100
value: 16.962
- type: recall_at_1000
value: 57.325
- type: recall_at_20
value: 4.517
- type: recall_at_3
value: 0.731
- type: recall_at_5
value: 1.1780000000000002
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 31.455
- type: map_at_1
value: 2.9739999999999998
- type: map_at_10
value: 12.183
- type: map_at_100
value: 18.772
- type: map_at_1000
value: 20.415
- type: map_at_20
value: 14.451
- type: map_at_3
value: 6.507000000000001
- type: map_at_5
value: 8.66
- type: mrr_at_1
value: 40.816326530612244
- type: mrr_at_10
value: 57.70975056689341
- type: mrr_at_100
value: 58.18379126542391
- type: mrr_at_1000
value: 58.18379126542391
- type: mrr_at_20
value: 57.85552316164561
- type: mrr_at_3
value: 54.08163265306123
- type: mrr_at_5
value: 56.42857142857143
- type: nauc_map_at_1000_diff1
value: 3.1567471051481437
- type: nauc_map_at_1000_max
value: -1.5882060729791523
- type: nauc_map_at_1000_std
value: 18.69622198722074
- type: nauc_map_at_100_diff1
value: 3.3449677678147536
- type: nauc_map_at_100_max
value: -2.8928606866168405
- type: nauc_map_at_100_std
value: 15.789984947653412
- type: nauc_map_at_10_diff1
value: 2.9696743570444264
- type: nauc_map_at_10_max
value: -9.096749212011876
- type: nauc_map_at_10_std
value: -5.38545817258353
- type: nauc_map_at_1_diff1
value: 20.680780404542546
- type: nauc_map_at_1_max
value: -7.04722927447817
- type: nauc_map_at_1_std
value: -7.062494733973898
- type: nauc_map_at_20_diff1
value: 4.070437790119271
- type: nauc_map_at_20_max
value: -4.84491434686032
- type: nauc_map_at_20_std
value: 0.5846341109021014
- type: nauc_map_at_3_diff1
value: 11.9634978045925
- type: nauc_map_at_3_max
value: -8.27834591046608
- type: nauc_map_at_3_std
value: -8.687615453381065
- type: nauc_map_at_5_diff1
value: 0.9195191526009436
- type: nauc_map_at_5_max
value: -1.673813362719489
- type: nauc_map_at_5_std
value: -6.67549753473631
- type: nauc_mrr_at_1000_diff1
value: 19.877993208719573
- type: nauc_mrr_at_1000_max
value: -10.37776706406218
- type: nauc_mrr_at_1000_std
value: 7.132169578056367
- type: nauc_mrr_at_100_diff1
value: 19.877993208719573
- type: nauc_mrr_at_100_max
value: -10.37776706406218
- type: nauc_mrr_at_100_std
value: 7.132169578056367
- type: nauc_mrr_at_10_diff1
value: 20.414285568401457
- type: nauc_mrr_at_10_max
value: -9.677800295687861
- type: nauc_mrr_at_10_std
value: 8.001103690180859
- type: nauc_mrr_at_1_diff1
value: 22.393284073955723
- type: nauc_mrr_at_1_max
value: -5.889370191243167
- type: nauc_mrr_at_1_std
value: -1.5183536173658247
- type: nauc_mrr_at_20_diff1
value: 20.455564720604055
- type: nauc_mrr_at_20_max
value: -10.230642830103074
- type: nauc_mrr_at_20_std
value: 7.863582453266621
- type: nauc_mrr_at_3_diff1
value: 17.554895390732618
- type: nauc_mrr_at_3_max
value: -15.618463505555052
- type: nauc_mrr_at_3_std
value: 5.913231577966864
- type: nauc_mrr_at_5_diff1
value: 18.393678507779914
- type: nauc_mrr_at_5_max
value: -11.903593353147762
- type: nauc_mrr_at_5_std
value: 7.580745996262831
- type: nauc_ndcg_at_1000_diff1
value: 13.746937095530473
- type: nauc_ndcg_at_1000_max
value: -0.9319249687895838
- type: nauc_ndcg_at_1000_std
value: 38.56328031451904
- type: nauc_ndcg_at_100_diff1
value: 13.854865944415895
- type: nauc_ndcg_at_100_max
value: -7.142142012591404
- type: nauc_ndcg_at_100_std
value: 35.61341954818848
- type: nauc_ndcg_at_10_diff1
value: 9.010144273248759
- type: nauc_ndcg_at_10_max
value: -15.320014897424574
- type: nauc_ndcg_at_10_std
value: 2.84883880489144
- type: nauc_ndcg_at_1_diff1
value: 20.939533945592967
- type: nauc_ndcg_at_1_max
value: -6.387319972188946
- type: nauc_ndcg_at_1_std
value: -0.5258673122126726
- type: nauc_ndcg_at_20_diff1
value: 14.660827309009496
- type: nauc_ndcg_at_20_max
value: -13.476196120145994
- type: nauc_ndcg_at_20_std
value: 8.22391881710838
- type: nauc_ndcg_at_3_diff1
value: 13.429985227235935
- type: nauc_ndcg_at_3_max
value: -14.904544592570247
- type: nauc_ndcg_at_3_std
value: 1.599779998183342
- type: nauc_ndcg_at_5_diff1
value: 8.085466231900622
- type: nauc_ndcg_at_5_max
value: -9.09591969526831
- type: nauc_ndcg_at_5_std
value: 3.5794092637248505
- type: nauc_precision_at_1000_diff1
value: -9.31941215946743
- type: nauc_precision_at_1000_max
value: 31.52913520470716
- type: nauc_precision_at_1000_std
value: 22.720784312185856
- type: nauc_precision_at_100_diff1
value: 8.958548406995279
- type: nauc_precision_at_100_max
value: 15.100597910674104
- type: nauc_precision_at_100_std
value: 71.04548238175113
- type: nauc_precision_at_10_diff1
value: 12.4698194690008
- type: nauc_precision_at_10_max
value: -15.84870544871496
- type: nauc_precision_at_10_std
value: 7.575297622501928
- type: nauc_precision_at_1_diff1
value: 22.393284073955723
- type: nauc_precision_at_1_max
value: -5.889370191243167
- type: nauc_precision_at_1_std
value: -1.5183536173658247
- type: nauc_precision_at_20_diff1
value: 15.393505718138758
- type: nauc_precision_at_20_max
value: -3.70684298539384
- type: nauc_precision_at_20_std
value: 29.426137824970304
- type: nauc_precision_at_3_diff1
value: 9.997768085465394
- type: nauc_precision_at_3_max
value: -17.12224314347674
- type: nauc_precision_at_3_std
value: -1.343018166772313
- type: nauc_precision_at_5_diff1
value: 3.8936997437913554
- type: nauc_precision_at_5_max
value: -5.689104289687632
- type: nauc_precision_at_5_std
value: 3.181098051304285
- type: nauc_recall_at_1000_diff1
value: 9.908303508158387
- type: nauc_recall_at_1000_max
value: 6.174506592699848
- type: nauc_recall_at_1000_std
value: 77.41931114780012
- type: nauc_recall_at_100_diff1
value: 10.286839241876192
- type: nauc_recall_at_100_max
value: -6.6138697026666815
- type: nauc_recall_at_100_std
value: 49.608313692633224
- type: nauc_recall_at_10_diff1
value: 2.215545846659851
- type: nauc_recall_at_10_max
value: -17.83025802478445
- type: nauc_recall_at_10_std
value: -3.3784768673705465
- type: nauc_recall_at_1_diff1
value: 20.680780404542546
- type: nauc_recall_at_1_max
value: -7.04722927447817
- type: nauc_recall_at_1_std
value: -7.062494733973898
- type: nauc_recall_at_20_diff1
value: 6.974410239251615
- type: nauc_recall_at_20_max
value: -14.161147924731646
- type: nauc_recall_at_20_std
value: 9.328412057721454
- type: nauc_recall_at_3_diff1
value: 7.904589805754212
- type: nauc_recall_at_3_max
value: -12.1912388648593
- type: nauc_recall_at_3_std
value: -9.221542013385555
- type: nauc_recall_at_5_diff1
value: -3.2604132752706914
- type: nauc_recall_at_5_max
value: -6.886351441658915
- type: nauc_recall_at_5_std
value: -7.014252851712789
- type: ndcg_at_1
value: 39.796
- type: ndcg_at_10
value: 31.455
- type: ndcg_at_100
value: 42.388999999999996
- type: ndcg_at_1000
value: 53.556000000000004
- type: ndcg_at_20
value: 30.808000000000003
- type: ndcg_at_3
value: 35.831
- type: ndcg_at_5
value: 32.845
- type: precision_at_1
value: 40.816
- type: precision_at_10
value: 27.143
- type: precision_at_100
value: 8.449
- type: precision_at_1000
value: 1.6179999999999999
- type: precision_at_20
value: 19.387999999999998
- type: precision_at_3
value: 35.374
- type: precision_at_5
value: 31.019999999999996
- type: recall_at_1
value: 2.9739999999999998
- type: recall_at_10
value: 19.39
- type: recall_at_100
value: 51.636
- type: recall_at_1000
value: 86.99900000000001
- type: recall_at_20
value: 26.478
- type: recall_at_3
value: 7.703
- type: recall_at_5
value: 11.42
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 86.9384765625
- type: ap
value: 31.737513704141552
- type: ap_weighted
value: 31.737513704141552
- type: f1
value: 71.5490757306975
- type: f1_weighted
value: 89.14632533489856
- type: main_score
value: 86.9384765625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 73.57668364459535
- type: f1
value: 73.90467103648074
- type: f1_weighted
value: 73.42158415034704
- type: main_score
value: 73.57668364459535
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 58.574148097494685
- type: v_measure
value: 58.574148097494685
- type: v_measure_std
value: 0.9443161637490822
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 88.1385229778864
- type: cosine_accuracy_threshold
value: 83.86307954788208
- type: cosine_ap
value: 80.17965893449055
- type: cosine_f1
value: 73.0614300100705
- type: cosine_f1_threshold
value: 80.7942807674408
- type: cosine_precision
value: 69.8603755416466
- type: cosine_recall
value: 76.56992084432717
- type: dot_accuracy
value: 88.2100494724921
- type: dot_accuracy_threshold
value: 83.84793996810913
- type: dot_ap
value: 80.18603932881858
- type: dot_f1
value: 73.07643714466204
- type: dot_f1_threshold
value: 80.87586164474487
- type: dot_precision
value: 70.10909090909091
- type: dot_recall
value: 76.3060686015831
- type: euclidean_accuracy
value: 88.1385229778864
- type: euclidean_accuracy_threshold
value: 56.77661895751953
- type: euclidean_ap
value: 80.1784070881624
- type: euclidean_f1
value: 73.04830369529574
- type: euclidean_f1_threshold
value: 61.91838979721069
- type: euclidean_precision
value: 69.96859144720948
- type: euclidean_recall
value: 76.41160949868075
- type: main_score
value: 80.18603932881858
- type: manhattan_accuracy
value: 88.0431543184121
- type: manhattan_accuracy_threshold
value: 3755.6137084960938
- type: manhattan_ap
value: 79.98270453664578
- type: manhattan_f1
value: 72.68242015061023
- type: manhattan_f1_threshold
value: 3892.494583129883
- type: manhattan_precision
value: 71.54907975460122
- type: manhattan_recall
value: 73.85224274406332
- type: max_ap
value: 80.18603932881858
- type: max_f1
value: 73.07643714466204
- type: max_precision
value: 71.54907975460122
- type: max_recall
value: 76.56992084432717
- type: similarity_accuracy
value: 88.1385229778864
- type: similarity_accuracy_threshold
value: 83.86307954788208
- type: similarity_ap
value: 80.17965893449055
- type: similarity_f1
value: 73.0614300100705
- type: similarity_f1_threshold
value: 80.7942807674408
- type: similarity_precision
value: 69.8603755416466
- type: similarity_recall
value: 76.56992084432717
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 89.7892653393876
- type: cosine_accuracy_threshold
value: 79.69566583633423
- type: cosine_ap
value: 87.4579867302024
- type: cosine_f1
value: 79.91620843152658
- type: cosine_f1_threshold
value: 78.53609323501587
- type: cosine_precision
value: 77.7155329210622
- type: cosine_recall
value: 82.24514936864799
- type: dot_accuracy
value: 89.78732487289945
- type: dot_accuracy_threshold
value: 80.05315661430359
- type: dot_ap
value: 87.44916182456272
- type: dot_f1
value: 79.90419878751591
- type: dot_f1_threshold
value: 78.57890725135803
- type: dot_precision
value: 77.73409057812728
- type: dot_recall
value: 82.19895287958116
- type: euclidean_accuracy
value: 89.78538440641131
- type: euclidean_accuracy_threshold
value: 62.29925751686096
- type: euclidean_ap
value: 87.45904868911386
- type: euclidean_f1
value: 79.93127404474657
- type: euclidean_f1_threshold
value: 65.61101078987122
- type: euclidean_precision
value: 77.62060210373595
- type: euclidean_recall
value: 82.38373883584848
- type: main_score
value: 87.46554314325058
- type: manhattan_accuracy
value: 89.76597974152986
- type: manhattan_accuracy_threshold
value: 3988.5299682617188
- type: manhattan_ap
value: 87.46554314325058
- type: manhattan_f1
value: 79.97181740645973
- type: manhattan_f1_threshold
value: 4235.905838012695
- type: manhattan_precision
value: 77.13713427283783
- type: manhattan_recall
value: 83.02279026793964
- type: max_ap
value: 87.46554314325058
- type: max_f1
value: 79.97181740645973
- type: max_precision
value: 77.73409057812728
- type: max_recall
value: 83.02279026793964
- type: similarity_accuracy
value: 89.7892653393876
- type: similarity_accuracy_threshold
value: 79.69566583633423
- type: similarity_ap
value: 87.4579867302024
- type: similarity_f1
value: 79.91620843152658
- type: similarity_f1_threshold
value: 78.53609323501587
- type: similarity_precision
value: 77.7155329210622
- type: similarity_recall
value: 82.24514936864799
---
# Updates
New open-source models and the to-do list will be listed at https://github.com/DunZhang/Stella/blob/main/news_and_todo.md.
You can also find these models on my [homepage](https://huggingface.co/infgrad).
# Introduction
The models are trained on top of `Alibaba-NLP/gte-large-en-v1.5` and `Alibaba-NLP/gte-Qwen2-1.5B-instruct`. Thanks to
their authors for their contributions!
**We have simplified prompt usage, providing two prompts for most general tasks: one for s2p and one for s2s.**
Prompt for the s2p task (e.g., retrieval):
```text
Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: {query}
```
Prompt for the s2s task (e.g., semantic textual similarity):
```text
Instruct: Retrieve semantically similar text.\nQuery: {query}
```
The models are trained with [MRL](https://arxiv.org/abs/2205.13147), so they support multiple output dimensions: 512, 768,
1024, 2048, 4096, 6144 and 8192.
The higher the dimension, the better the performance.
**Generally speaking, 1024d is good enough.** The MTEB score of 1024d is only 0.001 lower than that of 8192d.
# Model directory structure
The model directory structure is very simple: it is a standard SentenceTransformer directory **with a series of `2_Dense_{dims}` folders**, where `dims` is the final vector dimension.
For example, the `2_Dense_256` folder stores the Linear weights that project vectors down to 256 dimensions.
Please refer to the following chapters for specific instructions on how to use them.
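As a purely illustrative sketch of such a directory (the file names outside the `2_Dense_{dims}` folders follow standard SentenceTransformer conventions and may differ in the actual repository):
```text
stella_en_400M_v5/
├── config.json
├── model.safetensors
├── modules.json
├── 1_Pooling/
├── 2_Dense_256/
│   └── pytorch_model.bin
├── 2_Dense_1024/
│   └── pytorch_model.bin
└── 2_Dense_8192/
    └── pytorch_model.bin
```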
# Usage
You can use the `sentence-transformers` or `transformers` library to encode text.
## Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
# This model supports two prompts: "s2p_query" and "s2s_query" for sentence-to-passage and sentence-to-sentence tasks, respectively.
# They are defined in `config_sentence_transformers.json`
query_prompt_name = "s2p_query"
queries = [
"What are some ways to reduce stress?",
"What are the benefits of drinking green tea?",
]
# docs do not need any prompts
docs = [
"There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
"Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
]
# !The default dimension is 1024. If you need another dimension, clone the model and modify `modules.json`, replacing `2_Dense_1024` with another dimension, e.g. `2_Dense_256` or `2_Dense_8192`!
# On GPU
model = SentenceTransformer("dunzhang/stella_en_400M_v5", trust_remote_code=True).cuda()
# You can also use this model without `use_memory_efficient_attention` and `unpad_inputs`; it can then run on CPU:
# model = SentenceTransformer(
# "dunzhang/stella_en_400M_v5",
# trust_remote_code=True,
# device="cpu",
# config_kwargs={"use_memory_efficient_attention": False, "unpad_inputs": False}
# )
query_embeddings = model.encode(queries, prompt_name=query_prompt_name)
doc_embeddings = model.encode(docs)
print(query_embeddings.shape, doc_embeddings.shape)
# (2, 1024) (2, 1024)
similarities = model.similarity(query_embeddings, doc_embeddings)
print(similarities)
# tensor([[0.8398, 0.2990],
# [0.3282, 0.8095]])
```
## Transformers
```python
import os
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize
query_prompt = "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: "
queries = [
"What are some ways to reduce stress?",
"What are the benefits of drinking green tea?",
]
queries = [query_prompt + query for query in queries]
# docs do not need any prompts
docs = [
"There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
"Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
]
# The path of your model after cloning it
model_dir = "{Your MODEL_PATH}"
vector_dim = 1024
vector_linear_directory = f"2_Dense_{vector_dim}"
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).cuda().eval()
# You can also use this model without `use_memory_efficient_attention` and `unpad_inputs`; it can then also run on CPU (drop the `.cuda()` calls in that case).
# model = AutoModel.from_pretrained(model_dir, trust_remote_code=True, use_memory_efficient_attention=False, unpad_inputs=False).eval()
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
vector_linear = torch.nn.Linear(in_features=model.config.hidden_size, out_features=vector_dim)
vector_linear_dict = {
k.replace("linear.", ""): v for k, v in
torch.load(os.path.join(model_dir, f"{vector_linear_directory}/pytorch_model.bin")).items()
}
vector_linear.load_state_dict(vector_linear_dict)
vector_linear.cuda()
# Embed the queries
with torch.no_grad():
input_data = tokenizer(queries, padding="longest", truncation=True, max_length=512, return_tensors="pt")
input_data = {k: v.cuda() for k, v in input_data.items()}
attention_mask = input_data["attention_mask"]
last_hidden_state = model(**input_data)[0]
last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
query_vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
query_vectors = normalize(vector_linear(query_vectors).cpu().numpy())
# Embed the documents
with torch.no_grad():
input_data = tokenizer(docs, padding="longest", truncation=True, max_length=512, return_tensors="pt")
input_data = {k: v.cuda() for k, v in input_data.items()}
attention_mask = input_data["attention_mask"]
last_hidden_state = model(**input_data)[0]
last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
docs_vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
docs_vectors = normalize(vector_linear(docs_vectors).cpu().numpy())
print(query_vectors.shape, docs_vectors.shape)
# (2, 1024) (2, 1024)
similarities = query_vectors @ docs_vectors.T
print(similarities)
# [[0.8397531 0.29900077]
# [0.32818374 0.80954516]]
```
### infinity_emb
Usage via [infinity, MIT Licensed](https://github.com/michaelfeil/infinity).
```bash
docker run \
--gpus all -p "7997":"7997" \
michaelf34/infinity:0.0.69 \
v2 --model-id dunzhang/stella_en_400M_v5 --revision "refs/pr/24" --dtype bfloat16 --batch-size 16 --device cuda --engine torch --port 7997 --no-bettertransformer
```
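Once the server is running, you can query it over HTTP. The sketch below assumes infinity's OpenAI-compatible `/embeddings` route (an assumption about the server API, not something documented in this card):
```python
import requests

# Query the locally running infinity server started by the docker command above.
resp = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "dunzhang/stella_en_400M_v5",
        "input": [
            "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: What are some ways to reduce stress?"
        ],
    },
)
resp.raise_for_status()
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))  # the vector dimension
```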
# FAQ
Q: What are the training details?
A: The training method and datasets will be released in the future (the exact timing is unknown; they may be provided in a paper).
Q: How do I choose a suitable prompt for my own task?
A: In most cases, please use the s2p and s2s prompts. These two prompts account for the vast majority of the training
data.
Q: How do I reproduce the MTEB results?
A: Please use the evaluation scripts in `Alibaba-NLP/gte-Qwen2-1.5B-instruct` or `intfloat/e5-mistral-7b-instruct`.
Q: Why does each dimension have its own linear weight?
A: MRL has multiple training methods; we chose this one because it has the best performance.
Q: What is the sequence length of the models?
A: 512 is recommended; in our experiments, almost all models perform poorly on specialized long-text retrieval datasets. Besides, the
model is trained on datasets with a sequence length of 512, so this may be an area for further optimization.
If you have any questions, please start a discussion in the community tab. | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
mogaio/pr_ebsa_fr_tran_merged25_e1_end_offsets | mogaio | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-15T19:01:07 | 2023-12-15T19:02:21 | 51 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: setfit
metrics:
- accuracy_score
- classification_report
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Adil Hussain
Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin
Shah à l''époque où il fréquentait l''École nationale d''art dramatique'
- text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un
avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs
de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré
que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en
accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates
aspirent à renverser six circonscriptions détenues par les républicains que M.
Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés
de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine
Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la
conférence du parti démocrate à la Chambre des représentants,
Suite à la page suivante
a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions
de dollars aux campagnes dans les circonscriptions de New York Des problèmes à
venir pour les démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les
démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge.
Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune
issue ne soit en vue : les retombées potentielles pour leur parti lors des élections
de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils
parlent d''immigration - comme les démocrates le font pour l''avortement - et
sont clairement à l''attaque sur la question des migrants à New York, tandis que
les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication
pour le Centre de politique de l''Université de Virginie, au réseau USA Today
Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud
depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville,
et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au
nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux
frais de la ville Les démocrates doivent y remporter des victoires pour gagner
cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain
président de la Chambre des représentants Les publicités d''attaque des républicains
s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images
télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams
et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute
et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac
Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales
à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique
de la crise des migrants, soulignant que les élections de 2024 n''auront lieu
que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient
se poser'
- text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA
H-1B AUX ETATS-UNIS
Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy,
candidat républicain indien-américain à l''élection présidentielle, a promis de
"vider" le système basé sur la loterie et de le remplacer par un système d''admission
méritocratique s''il remporte les élections présidentielles de 2024'
- text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover
("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris
Owen Donald Glover
("Queer as Folk") a 54 ans. a 54 ans. Acteur
("Je sais ce que vous avez fait l''été dernier") a 50 ans'
- text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche.
"Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi
en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de
ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient
même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne
peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens
les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi
réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart,
ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago,
voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations
de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection
américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé
Howard, qui était le roi de tous les médias, en prince Harry de tous les médias.
Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission
de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire
type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous
avec lui ?"
M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre
de ses sketches à l''antenne, a été un critique virulent de Trump tout au long
de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à
nouveau en 2024.
En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu
l''élection ?"
Il a poursuivi en qualifiant les partisans de Trump de "nigauds".
"Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il
y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré
Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface
de la terre, pourquoi traîniez-vous avec lui ?"
M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas
soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes
qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke"
comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué
ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus
récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans
un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump
a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon
OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern
chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego".
Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela
signifie, ou que je soutiens les personnes qui veulent être transgenres ou que
je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine.
"Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs.
L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course
à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé
sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy
Failla critique Stern.
"L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire
la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne",
a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre
Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral'
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy_score
value: 0.9434954007884363
name: Accuracy_Score
- type: classification_report
value:
'0':
precision: 0.9361702127659575
recall: 0.9322033898305084
f1-score: 0.9341825902335456
support: 236
'1':
precision: 0.9333333333333333
recall: 0.9302325581395349
f1-score: 0.9317803660565723
support: 301
'2':
precision: 0.9646017699115044
recall: 0.9732142857142857
f1-score: 0.9688888888888889
support: 224
accuracy: 0.9434954007884363
macro avg:
precision: 0.9447017720035985
recall: 0.945216744561443
f1-score: 0.9449506150596689
support: 761
weighted avg:
precision: 0.9434169513880108
recall: 0.9434954007884363
f1-score: 0.9434482162802315
support: 761
name: Classification_Report
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
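As a minimal, hypothetical sketch of this two-stage procedure (the texts, labels, and hyperparameter values below are illustrative, not the actual training data):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Illustrative few-shot examples; the real training set is not published here.
train_ds = Dataset.from_dict({
    "text": ["Un texte positif ...", "Un texte négatif ...", "Un texte objectif ..."],
    "label": ["pos", "neg", "obj"],
})

# The Sentence Transformer body used by this card; a LogisticRegression head is attached by default.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

args = TrainingArguments(batch_size=8, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)

# Stage 1: contrastive fine-tuning of the embedding body.
# Stage 2: fitting the classification head on the tuned embeddings.
trainer.train()

preds = model.predict(["Un nouvel exemple à classer"])
```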
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> |
| neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> |
| obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy_Score | Classification_Report |
|:--------|:---------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **all** | 0.9435 | {'0': {'precision': 0.9361702127659575, 'recall': 0.9322033898305084, 'f1-score': 0.9341825902335456, 'support': 236}, '1': {'precision': 0.9333333333333333, 'recall': 0.9302325581395349, 'f1-score': 0.9317803660565723, 'support': 301}, '2': {'precision': 0.9646017699115044, 'recall': 0.9732142857142857, 'f1-score': 0.9688888888888889, 'support': 224}, 'accuracy': 0.9434954007884363, 'macro avg': {'precision': 0.9447017720035985, 'recall': 0.945216744561443, 'f1-score': 0.9449506150596689, 'support': 761}, 'weighted avg': {'precision': 0.9434169513880108, 'recall': 0.9434954007884363, 'f1-score': 0.9434482162802315, 'support': 761}} |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e1_end_offsets")
# Run inference
preds = model("Adil Hussain\nAdil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin Shah à l'époque où il fréquentait l'École nationale d'art dramatique")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 9 | 247.2638 | 2089 |
| Label | Training Sample Count |
|:------|:----------------------|
| neg | 913 |
| obj | 1216 |
| pos | 911 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 1
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
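For reference, a hypothetical reconstruction of these values as SetFit `TrainingArguments` (the `distance_metric` is left at its library default here; this mapping is illustrative, not the original training script):
```python
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import TrainingArguments

args = TrainingArguments(
    batch_size=(8, 8),                  # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    max_steps=-1,
    sampling_strategy="oversampling",
    num_iterations=1,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
    eval_max_steps=-1,
    load_best_model_at_end=False,
)
```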
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.3703 | - |
| 0.0658 | 50 | 0.3145 | - |
| 0.1316 | 100 | 0.1839 | - |
| 0.1974 | 150 | 0.2558 | - |
| 0.2632 | 200 | 0.2683 | - |
| 0.3289 | 250 | 0.1572 | - |
| 0.3947 | 300 | 0.1953 | - |
| 0.4605 | 350 | 0.171 | - |
| 0.5263 | 400 | 0.2326 | - |
| 0.5921 | 450 | 0.1762 | - |
| 0.6579 | 500 | 0.2818 | - |
| 0.7237 | 550 | 0.2733 | - |
| 0.7895 | 600 | 0.195 | - |
| 0.8553 | 650 | 0.2104 | - |
| 0.9211 | 700 | 0.2124 | - |
| 0.9868 | 750 | 0.0818 | - |
| 1.0526 | 800 | 0.1046 | - |
| 1.1184 | 850 | 0.1633 | - |
| 1.1842 | 900 | 0.3207 | - |
| 1.25 | 950 | 0.2703 | - |
| 1.3158 | 1000 | 0.1934 | - |
| 1.3816 | 1050 | 0.2547 | - |
| 1.4474 | 1100 | 0.0933 | - |
| 1.5132 | 1150 | 0.2102 | - |
| 1.5789 | 1200 | 0.0699 | - |
| 1.6447 | 1250 | 0.1778 | - |
| 1.7105 | 1300 | 0.1796 | - |
| 1.7763 | 1350 | 0.0221 | - |
| 1.8421 | 1400 | 0.2154 | - |
| 1.9079 | 1450 | 0.1683 | - |
| 1.9737 | 1500 | 0.3096 | - |
| 2.0395 | 1550 | 0.201 | - |
| 2.1053 | 1600 | 0.1954 | - |
| 2.1711 | 1650 | 0.2301 | - |
| 2.2368 | 1700 | 0.1141 | - |
| 2.3026 | 1750 | 0.1949 | - |
| 2.3684 | 1800 | 0.164 | - |
| 2.4342 | 1850 | 0.2307 | - |
| 2.5 | 1900 | 0.1912 | - |
| 2.5658 | 1950 | 0.2349 | - |
| 2.6316 | 2000 | 0.0922 | - |
| 2.6974 | 2050 | 0.0702 | - |
| 2.7632 | 2100 | 0.1089 | - |
| 2.8289 | 2150 | 0.1711 | - |
| 2.8947 | 2200 | 0.1432 | - |
| 2.9605 | 2250 | 0.2739 | - |
| 3.0263 | 2300 | 0.1889 | - |
| 3.0921 | 2350 | 0.1036 | - |
| 3.1579 | 2400 | 0.1372 | - |
| 3.2237 | 2450 | 0.028 | - |
| 3.2895 | 2500 | 0.1739 | - |
| 3.3553 | 2550 | 0.142 | - |
| 3.4211 | 2600 | 0.0838 | - |
| 3.4868 | 2650 | 0.0657 | - |
| 3.5526 | 2700 | 0.0054 | - |
| 3.6184 | 2750 | 0.0426 | - |
| 3.6842 | 2800 | 0.1974 | - |
| 3.75 | 2850 | 0.0279 | - |
| 3.8158 | 2900 | 0.1326 | - |
| 3.8816 | 2950 | 0.1614 | - |
| 3.9474 | 3000 | 0.1251 | - |
| 4.0132 | 3050 | 0.1174 | - |
| 4.0789 | 3100 | 0.1948 | - |
| 4.1447 | 3150 | 0.0555 | - |
| 4.2105 | 3200 | 0.0064 | - |
| 4.2763 | 3250 | 0.064 | - |
| 4.3421 | 3300 | 0.0013 | - |
| 4.4079 | 3350 | 0.135 | - |
| 4.4737 | 3400 | 0.0574 | - |
| 4.5395 | 3450 | 0.174 | - |
| 4.6053 | 3500 | 0.2199 | - |
| 4.6711 | 3550 | 0.387 | - |
| 4.7368 | 3600 | 0.114 | - |
| 4.8026 | 3650 | 0.0853 | - |
| 4.8684 | 3700 | 0.0325 | - |
| 4.9342 | 3750 | 0.019 | - |
| 5.0 | 3800 | 0.0572 | - |
| 0.0013 | 1 | 0.1435 | - |
| 0.0658 | 50 | 0.0969 | - |
| 0.1316 | 100 | 0.1085 | - |
| 0.1974 | 150 | 0.0271 | - |
| 0.2632 | 200 | 0.0138 | - |
| 0.3289 | 250 | 0.058 | - |
| 0.3947 | 300 | 0.1205 | - |
| 0.4605 | 350 | 0.0788 | - |
| 0.5263 | 400 | 0.1449 | - |
| 0.5921 | 450 | 0.0383 | - |
| 0.6579 | 500 | 0.0338 | - |
| 0.7237 | 550 | 0.1253 | - |
| 0.7895 | 600 | 0.069 | - |
| 0.8553 | 650 | 0.104 | - |
| 0.9211 | 700 | 0.0462 | - |
| 0.9868 | 750 | 0.1975 | - |
| 1.0526 | 800 | 0.0241 | - |
| 1.1184 | 850 | 0.0426 | - |
| 1.1842 | 900 | 0.0519 | - |
| 1.25 | 950 | 0.0815 | - |
| 1.3158 | 1000 | 0.1839 | - |
| 1.3816 | 1050 | 0.0198 | - |
| 1.4474 | 1100 | 0.0128 | - |
| 1.5132 | 1150 | 0.1645 | - |
| 1.5789 | 1200 | 0.0019 | - |
| 1.6447 | 1250 | 0.0557 | - |
| 1.7105 | 1300 | 0.0098 | - |
| 1.7763 | 1350 | 0.001 | - |
| 1.8421 | 1400 | 0.1557 | - |
| 1.9079 | 1450 | 0.1286 | - |
| 1.9737 | 1500 | 0.094 | - |
| 2.0395 | 1550 | 0.0059 | - |
| 2.1053 | 1600 | 0.0227 | - |
| 2.1711 | 1650 | 0.0899 | - |
| 2.2368 | 1700 | 0.0053 | - |
| 2.3026 | 1750 | 0.0021 | - |
| 2.3684 | 1800 | 0.0114 | - |
| 2.4342 | 1850 | 0.1163 | - |
| 2.5 | 1900 | 0.0959 | - |
| 2.5658 | 1950 | 0.0252 | - |
| 2.6316 | 2000 | 0.0921 | - |
| 2.6974 | 2050 | 0.1159 | - |
| 2.7632 | 2100 | 0.0026 | - |
| 2.8289 | 2150 | 0.1211 | - |
| 2.8947 | 2200 | 0.1843 | - |
| 2.9605 | 2250 | 0.0014 | - |
| 3.0263 | 2300 | 0.0085 | - |
| 3.0921 | 2350 | 0.0839 | - |
| 3.1579 | 2400 | 0.2372 | - |
| 3.2237 | 2450 | 0.0213 | - |
| 3.2895 | 2500 | 0.0155 | - |
| 3.3553 | 2550 | 0.1128 | - |
| 3.4211 | 2600 | 0.0945 | - |
| 3.4868 | 2650 | 0.0917 | - |
| 3.5526 | 2700 | 0.0011 | - |
| 3.6184 | 2750 | 0.0024 | - |
| 3.6842 | 2800 | 0.0044 | - |
| 3.75 | 2850 | 0.121 | - |
| 3.8158 | 2900 | 0.0056 | - |
| 3.8816 | 2950 | 0.003 | - |
| 3.9474 | 3000 | 0.0899 | - |
| 4.0132 | 3050 | 0.0157 | - |
| 4.0789 | 3100 | 0.1188 | - |
| 4.1447 | 3150 | 0.001 | - |
| 4.2105 | 3200 | 0.0222 | - |
| 4.2763 | 3250 | 0.1209 | - |
| 4.3421 | 3300 | 0.1085 | - |
| 4.4079 | 3350 | 0.0054 | - |
| 4.4737 | 3400 | 0.0009 | - |
| 4.5395 | 3450 | 0.0015 | - |
| 4.6053 | 3500 | 0.003 | - |
| 4.6711 | 3550 | 0.0009 | - |
| 4.7368 | 3600 | 0.0003 | - |
| 4.8026 | 3650 | 0.0009 | - |
| 4.8684 | 3700 | 0.03 | - |
| 4.9342 | 3750 | 0.1206 | - |
| 5.0 | 3800 | 0.0003 | - |
| 0.0013 | 1 | 0.2045 | - |
| 0.0658 | 50 | 0.0078 | - |
| 0.1316 | 100 | 0.0087 | - |
| 0.1974 | 150 | 0.0386 | - |
| 0.2632 | 200 | 0.1015 | - |
| 0.3289 | 250 | 0.0022 | - |
| 0.3947 | 300 | 0.0291 | - |
| 0.4605 | 350 | 0.0013 | - |
| 0.5263 | 400 | 0.0022 | - |
| 0.5921 | 450 | 0.1324 | - |
| 0.6579 | 500 | 0.113 | - |
| 0.7237 | 550 | 0.0011 | - |
| 0.7895 | 600 | 0.1723 | - |
| 0.8553 | 650 | 0.0049 | - |
| 0.9211 | 700 | 0.206 | - |
| 0.9868 | 750 | 0.1683 | - |
| 1.0526 | 800 | 0.0954 | - |
| 1.1184 | 850 | 0.018 | - |
| 1.1842 | 900 | 0.1854 | - |
| 1.25 | 950 | 0.0342 | - |
| 1.3158 | 1000 | 0.0015 | - |
| 1.3816 | 1050 | 0.0062 | - |
| 1.4474 | 1100 | 0.1187 | - |
| 1.5132 | 1150 | 0.0048 | - |
| 1.5789 | 1200 | 0.0011 | - |
| 1.6447 | 1250 | 0.002 | - |
| 1.7105 | 1300 | 0.092 | - |
| 1.7763 | 1350 | 0.1245 | - |
| 1.8421 | 1400 | 0.0009 | - |
| 1.9079 | 1450 | 0.1185 | - |
| 1.9737 | 1500 | 0.0017 | - |
| 2.0395 | 1550 | 0.008 | - |
| 2.1053 | 1600 | 0.0049 | - |
| 2.1711 | 1650 | 0.0083 | - |
| 2.2368 | 1700 | 0.0026 | - |
| 2.3026 | 1750 | 0.0081 | - |
| 2.3684 | 1800 | 0.0036 | - |
| 2.4342 | 1850 | 0.0016 | - |
| 2.5 | 1900 | 0.0017 | - |
| 2.5658 | 1950 | 0.0014 | - |
| 2.6316 | 2000 | 0.0017 | - |
| 2.6974 | 2050 | 0.002 | - |
| 2.7632 | 2100 | 0.1022 | - |
| 2.8289 | 2150 | 0.0004 | - |
| 2.8947 | 2200 | 0.0007 | - |
| 2.9605 | 2250 | 0.0794 | - |
| 3.0263 | 2300 | 0.0183 | - |
| 3.0921 | 2350 | 0.0377 | - |
| 3.1579 | 2400 | 0.029 | - |
| 3.2237 | 2450 | 0.0003 | - |
| 3.2895 | 2500 | 0.0961 | - |
| 3.3553 | 2550 | 0.0008 | - |
| 3.4211 | 2600 | 0.0873 | - |
| 3.4868 | 2650 | 0.0501 | - |
| 3.5526 | 2700 | 0.0029 | - |
| 3.6184 | 2750 | 0.0008 | - |
| 3.6842 | 2800 | 0.0004 | - |
| 3.75 | 2850 | 0.0011 | - |
| 3.8158 | 2900 | 0.0518 | - |
| 3.8816 | 2950 | 0.0002 | - |
| 3.9474 | 3000 | 0.1115 | - |
| 4.0132 | 3050 | 0.0129 | - |
| 4.0789 | 3100 | 0.0005 | - |
| 4.1447 | 3150 | 0.0012 | - |
| 4.2105 | 3200 | 0.1086 | - |
| 4.2763 | 3250 | 0.0199 | - |
| 4.3421 | 3300 | 0.0004 | - |
| 4.4079 | 3350 | 0.0001 | - |
| 4.4737 | 3400 | 0.0832 | - |
| 4.5395 | 3450 | 0.0003 | - |
| 4.6053 | 3500 | 0.0041 | - |
| 4.6711 | 3550 | 0.1146 | - |
| 4.7368 | 3600 | 0.0027 | - |
| 4.8026 | 3650 | 0.0002 | - |
| 4.8684 | 3700 | 0.0544 | - |
| 4.9342 | 3750 | 0.0002 | - |
| 5.0 | 3800 | 0.0046 | - |
| 0.0013 | 1 | 0.0015 | - |
| 0.0658 | 50 | 0.1973 | - |
| 0.1316 | 100 | 0.0106 | - |
| 0.1974 | 150 | 0.0744 | - |
| 0.2632 | 200 | 0.1033 | - |
| 0.3289 | 250 | 0.0425 | - |
| 0.3947 | 300 | 0.1125 | - |
| 0.4605 | 350 | 0.0018 | - |
| 0.5263 | 400 | 0.0019 | - |
| 0.5921 | 450 | 0.0002 | - |
| 0.6579 | 500 | 0.0007 | - |
| 0.7237 | 550 | 0.1393 | - |
| 0.7895 | 600 | 0.0002 | - |
| 0.8553 | 650 | 0.0043 | - |
| 0.9211 | 700 | 0.0339 | - |
| 0.9868 | 750 | 0.0002 | - |
| 0.0013 | 1 | 0.0007 | - |
| 0.0658 | 50 | 0.0419 | - |
| 0.1316 | 100 | 0.0068 | - |
| 0.1974 | 150 | 0.1401 | - |
| 0.2632 | 200 | 0.0423 | - |
| 0.3289 | 250 | 0.1122 | - |
| 0.3947 | 300 | 0.0037 | - |
| 0.4605 | 350 | 0.005 | - |
| 0.5263 | 400 | 0.0006 | - |
| 0.5921 | 450 | 0.0006 | - |
| 0.6579 | 500 | 0.0016 | - |
| 0.7237 | 550 | 0.1244 | - |
| 0.7895 | 600 | 0.0016 | - |
| 0.8553 | 650 | 0.0028 | - |
| 0.9211 | 700 | 0.002 | - |
| 0.9868 | 750 | 0.057 | - |
| 0.0013 | 1 | 0.1396 | - |
| 0.0658 | 50 | 0.0366 | - |
| 0.1316 | 100 | 0.0021 | - |
| 0.1974 | 150 | 0.1088 | - |
| 0.2632 | 200 | 0.0449 | - |
| 0.3289 | 250 | 0.0187 | - |
| 0.3947 | 300 | 0.0017 | - |
| 0.4605 | 350 | 0.1262 | - |
| 0.5263 | 400 | 0.0052 | - |
| 0.5921 | 450 | 0.1188 | - |
| 0.6579 | 500 | 0.0002 | - |
| 0.7237 | 550 | 0.0006 | - |
| 0.7895 | 600 | 0.0758 | - |
| 0.8553 | 650 | 0.025 | - |
| 0.9211 | 700 | 0.0052 | - |
| 0.9868 | 750 | 0.1985 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CAS"
] |
RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-07-13T17:52:53 | 2024-07-15T04:25:37 | 51 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
UNAversal-8x7B-v1beta - GGUF
- Model creator: https://huggingface.co/fblgit/
- Original model: https://huggingface.co/fblgit/UNAversal-8x7B-v1beta/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [UNAversal-8x7B-v1beta.Q2_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q2_K.gguf) | Q2_K | 16.12GB |
| [UNAversal-8x7B-v1beta.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ3_XS.gguf) | IQ3_XS | 18.02GB |
| [UNAversal-8x7B-v1beta.IQ3_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ3_S.gguf) | IQ3_S | 19.03GB |
| [UNAversal-8x7B-v1beta.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q3_K_S.gguf) | Q3_K_S | 19.03GB |
| [UNAversal-8x7B-v1beta.IQ3_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ3_M.gguf) | IQ3_M | 19.96GB |
| [UNAversal-8x7B-v1beta.Q3_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q3_K.gguf) | Q3_K | 21.0GB |
| [UNAversal-8x7B-v1beta.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q3_K_M.gguf) | Q3_K_M | 21.0GB |
| [UNAversal-8x7B-v1beta.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q3_K_L.gguf) | Q3_K_L | 22.51GB |
| [UNAversal-8x7B-v1beta.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ4_XS.gguf) | IQ4_XS | 23.63GB |
| [UNAversal-8x7B-v1beta.Q4_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_0.gguf) | Q4_0 | 24.63GB |
| [UNAversal-8x7B-v1beta.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.IQ4_NL.gguf) | IQ4_NL | 24.91GB |
| [UNAversal-8x7B-v1beta.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_K_S.gguf) | Q4_K_S | 24.91GB |
| [UNAversal-8x7B-v1beta.Q4_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_K.gguf) | Q4_K | 26.49GB |
| [UNAversal-8x7B-v1beta.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_K_M.gguf) | Q4_K_M | 26.49GB |
| [UNAversal-8x7B-v1beta.Q4_1.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q4_1.gguf) | Q4_1 | 27.32GB |
| [UNAversal-8x7B-v1beta.Q5_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_0.gguf) | Q5_0 | 30.02GB |
| [UNAversal-8x7B-v1beta.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_K_S.gguf) | Q5_K_S | 30.02GB |
| [UNAversal-8x7B-v1beta.Q5_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_K.gguf) | Q5_K | 30.95GB |
| [UNAversal-8x7B-v1beta.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_K_M.gguf) | Q5_K_M | 30.95GB |
| [UNAversal-8x7B-v1beta.Q5_1.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q5_1.gguf) | Q5_1 | 32.71GB |
| [UNAversal-8x7B-v1beta.Q6_K.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/blob/main/UNAversal-8x7B-v1beta.Q6_K.gguf) | Q6_K | 35.74GB |
| [UNAversal-8x7B-v1beta.Q8_0.gguf](https://huggingface.co/RichardErkhov/fblgit_-_UNAversal-8x7B-v1beta-gguf/tree/main/) | Q8_0 | 46.22GB |
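As a minimal usage sketch (the chosen quant file, context size, and prompt are illustrative assumptions, not a recommended configuration), one way to run any of the files above locally is through llama-cpp-python:

```python
# Hedged sketch: load one of the GGUF quants from the table with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="UNAversal-8x7B-v1beta.Q4_K_M.gguf",  # any quant from the table above
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)
out = llm("Explain uniform neural alignment in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```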
Original model description:
---
language:
- en
license: cc-by-nc-sa-4.0
library_name: transformers
tags:
- UNA
- juanako
- mixtral
- MoE
model-index:
- name: UNAversal-8x7B-v1beta
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.9
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 70.39
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.97
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.0
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.64
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=fblgit/UNAversal-8x7B-v1beta
name: Open LLM Leaderboard
---
# UNAversal - Uniform Neural Alignment (MoE)
This is just a beta, a first release, so people can start working on Frankenstein merges and the like.
It achieves high GSM/Math and TruthfulQA (TQA) scores, so ideally you can merge it with other Mixtrals and see what comes out of it.
Based on [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
## UNA Details
For this model we went with the most obvious approach: placing UNA on the router_logit. It does work, but we saw much better performance on SFT by doing so.
So this model DOES have a UNA-SFT phase; it is highly experimental, and it merely used LLaMA-Factory datasets, for example Alpaca.
As with the others:
- It can be fine-tuned further; try 2e-5 or **1e-4 (since it is a MoE)**, as in the sketch after this list.
- It can be merged; you will have to improvise here, so please report findings in a discussion thread.
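A hedged fine-tuning sketch follows (the batch size, epoch count, and output directory are illustrative assumptions, not the authors' recipe; only the learning rates come from the note above):

```python
# Minimal sketch: load the checkpoint and set the suggested MoE learning rate.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments

model = AutoModelForCausalLM.from_pretrained(
    "fblgit/UNAversal-8x7B-v1beta", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("fblgit/UNAversal-8x7B-v1beta")

args = TrainingArguments(
    output_dir="unaversal-sft",          # illustrative output path
    learning_rate=1e-4,                  # MoE-friendly rate from above; 2e-5 is the safer choice
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    bf16=True,
)
# Pair `args` with a Trainer (or a framework such as LLaMA-Factory) and your dataset.
```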
**REMINDER**: please cite; it genuinely helps the research and the lab itself.
## NEED YOUR HELP!!
I need a multi-turn training loop for Mixtral that can properly squeeze the juice out of 8x H100s. Please feel free to reach @fblgit on either Discord or Twitter. Thanks!
# Evals
Here are some; we also submitted the model to the HF eval queue.
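The tables below are lm-evaluation-harness outputs. A hedged reproduction sketch using the harness's Python API (the task selection, dtype, and batch size here are assumptions):

```python
# Sketch: re-run the headline evals with EleutherAI's lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=fblgit/UNAversal-8x7B-v1beta,dtype=float16",
    tasks=["gsm8k", "arc_challenge", "truthfulqa_mc2"],
    batch_size="auto",
)
print(results["results"])  # per-task metric dictionaries
```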
## GSM8k 5-Shot
```
|Tasks|Version| Filter |n-shot| Metric |Value | |Stderr|
|-----|-------|----------|-----:|-----------|-----:|---|-----:|
|gsm8k|Yaml |get-answer| 5|exact_match|0.6603|± | 0.013|
```
## ARC 25-Shot
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|-------|------|-----:|--------|-----:|---|-----:|
|arc_challenge|Yaml |none | 25|acc |0.6621|± |0.0138|
| | |none | 25|acc_norm|0.6962|± |0.0134|
```
## TruthfulQA 0-Shot (MC2)
```
| Tasks |Version|Filter|n-shot|Metric|Value | |Stderr|
|--------------|-------|------|-----:|------|-----:|---|-----:|
|truthfulqa_mc2|Yaml |none | 0|acc |0.7122|± |0.0141|
```
## 0-Shots Evals
```
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|--------------|-------|------|-----:|----------|-----:|---|-----:|
|arc_challenge |Yaml |none | 0|acc |0.6101|± |0.0143|
| | |none | 0|acc_norm |0.6425|± |0.0140|
|arc_easy |Yaml |none | 0|acc |0.8615|± |0.0071|
| | |none | 0|acc_norm |0.8375|± |0.0076|
|boolq |Yaml |none | 0|acc |0.8624|± |0.0060|
|lambada_openai|Yaml |none | 0|perplexity|2.8318|± |0.0507|
| | |none | 0|acc |0.7650|± |0.0059|
|mathqa |Yaml |none | 0|acc |0.4472|± |0.0091|
| | |none | 0|acc_norm |0.4436|± |0.0091|
|piqa |Yaml |none | 0|acc |0.8292|± |0.0088|
| | |none | 0|acc_norm |0.8422|± |0.0085|
|pubmedqa |Yaml |none | 0|acc |0.7920|± |0.0182|
|sciq |Yaml |none | 0|acc |0.9630|± |0.0060|
| | |none | 0|acc_norm |0.9370|± |0.0077|
```
## BBH at 0-Shot
```
vllm (pretrained=fblgit/UNAversal-8x7B-v1beta,tensor_parallel_size=2,data_parallel_size=4,gpu_memory_utilization=0.8,dtype=float16), gen_kwargs: (None), limit: None, num_fewshot: 0, batch_size: auto
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|----------------------------------------------------------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|± |0.1772|
| - bbh_cot_fewshot_boolean_expressions |Yaml |get-answer| 0|exact_match|0.8840|± |0.0203|
| - bbh_cot_fewshot_causal_judgement |Yaml |get-answer| 0|exact_match|0.6417|± |0.0352|
| - bbh_cot_fewshot_date_understanding |Yaml |get-answer| 0|exact_match|0.7600|± |0.0271|
| - bbh_cot_fewshot_disambiguation_qa |Yaml |get-answer| 0|exact_match|0.7160|± |0.0286|
| - bbh_cot_fewshot_dyck_languages |Yaml |get-answer| 0|exact_match|0.1800|± |0.0243|
| - bbh_cot_fewshot_formal_fallacies |Yaml |get-answer| 0|exact_match|0.6520|± |0.0302|
| - bbh_cot_fewshot_geometric_shapes |Yaml |get-answer| 0|exact_match|0.3880|± |0.0309|
| - bbh_cot_fewshot_hyperbaton |Yaml |get-answer| 0|exact_match|0.9600|± |0.0124|
| - bbh_cot_fewshot_logical_deduction_five_objects |Yaml |get-answer| 0|exact_match|0.5360|± |0.0316|
| - bbh_cot_fewshot_logical_deduction_seven_objects |Yaml |get-answer| 0|exact_match|0.5040|± |0.0317|
| - bbh_cot_fewshot_logical_deduction_three_objects |Yaml |get-answer| 0|exact_match|0.8600|± |0.0220|
| - bbh_cot_fewshot_movie_recommendation |Yaml |get-answer| 0|exact_match|0.7840|± |0.0261|
| - bbh_cot_fewshot_multistep_arithmetic_two |Yaml |get-answer| 0|exact_match|0.6600|± |0.0300|
| - bbh_cot_fewshot_navigate |Yaml |get-answer| 0|exact_match|0.8160|± |0.0246|
| - bbh_cot_fewshot_object_counting |Yaml |get-answer| 0|exact_match|0.8360|± |0.0235|
| - bbh_cot_fewshot_penguins_in_a_table |Yaml |get-answer| 0|exact_match|0.7329|± |0.0367|
| - bbh_cot_fewshot_reasoning_about_colored_objects |Yaml |get-answer| 0|exact_match|0.8120|± |0.0248|
| - bbh_cot_fewshot_ruin_names |Yaml |get-answer| 0|exact_match|0.4440|± |0.0315|
| - bbh_cot_fewshot_salient_translation_error_detection |Yaml |get-answer| 0|exact_match|0.5200|± |0.0317|
| - bbh_cot_fewshot_snarks |Yaml |get-answer| 0|exact_match|0.7135|± |0.0340|
| - bbh_cot_fewshot_sports_understanding |Yaml |get-answer| 0|exact_match|0.9400|± |0.0151|
| - bbh_cot_fewshot_temporal_sequences |Yaml |get-answer| 0|exact_match|0.7560|± |0.0272|
| - bbh_cot_fewshot_tracking_shuffled_objects_five_objects |Yaml |get-answer| 0|exact_match|0.5680|± |0.0314|
| - bbh_cot_fewshot_tracking_shuffled_objects_seven_objects|Yaml |get-answer| 0|exact_match|0.6280|± |0.0306|
| - bbh_cot_fewshot_tracking_shuffled_objects_three_objects|Yaml |get-answer| 0|exact_match|0.6280|± |0.0306|
| - bbh_cot_fewshot_web_of_lies |Yaml |get-answer| 0|exact_match|0.9560|± |0.0130|
| - bbh_cot_fewshot_word_sorting |Yaml |get-answer| 0|exact_match|0.3800|± |0.0308|
|Groups|Version| Filter |n-shot| Metric |Value | |Stderr|
|------|-------|----------|-----:|-----------|-----:|---|-----:|
|bbh |N/A |get-answer| 0|exact_match|0.6752|± |0.1772|
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNAversal-8x7B-v1beta)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.78|
|AI2 Reasoning Challenge (25-Shot)|69.80|
|HellaSwag (10-Shot) |86.90|
|MMLU (5-Shot) |70.39|
|TruthfulQA (0-shot) |71.97|
|Winogrande (5-shot) |82.00|
|GSM8k (5-shot) |61.64|
| [
"TRANSLATION"
] | [
"PUBMEDQA",
"SCIQ"
] |
avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-final | avsolatorio | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1943715",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-14T19:44:29 | 2024-07-14T19:44:32 | 51 | 1 | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1943715
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: who sang the song queen of my heart
sentences:
- Queen of My Heart Queen of My Heart "Queen of My Heart" is a song by Irish boy
band Westlife. It was released on 8 November 2001 as the first single from their
third studio album, "World of Our Own". It was released as a double A-side single
with "When You're Looking Like That" in UK and Ireland. It debuted at number one
on the UK Singles Chart, giving the band their ninth UK number one single in two
and a half years, staying at the top of the chart for one week. It remains one
of the band's most successful singles, becoming the
- Stephanie Edwards (Grey's Anatomy) Stephanie Edwards (Grey's Anatomy) Stephanie
Edwards, M.D. is a fictional character from the medical drama television series
"Grey's Anatomy", which airs on the American Broadcasting Company (ABC) in the
United States. The character was created by series producer Shonda Rhimes, and
was portrayed by actress Jerrika Hinton from 2012 to 2017. Introduced as a surgical
intern at the fictional Seattle Grace Mercy West Hospital, later renamed Grey
Sloan Memorial Hospital, Stephanie works her way up to resident level with fellow
intern and friend, Jo Wilson (Camilla Luddington). The character was described
by Hinton as "innovative" who strives to be the
- Heart of My Heart the 1926 song by Max, the Chief, and detect-o-tune operator
Arrick. Heart of My Heart "The Gang that Sang Heart of My Heart" is a popular
song. The music and lyrics were written by Ben Ryan (1892–1968) in 1926. It reminisces
about being in a youthful quartet, singing "Heart of My Heart". The quoted line,
"Heart of My Heart", so longed for in the 1926 song, begins the chorus of "The
Story of the Rose", written by Andrew Mack (1863–1931) in 1899. Mack was a popular
American actor, singer and comedian who reportedly first sang this song in an
1899
- source_sentence: when did gretsch stop making guitars in america
sentences:
- Get Low (Lil Jon & the East Side Boyz song) Get Low (Lil Jon & the East Side Boyz
song) "Get Low" is a song by Lil Jon & the East Side Boyz, featuring Ying Yang
Twins, released in 2003. It is featured on the 2002 album "Kings of Crunk". The
song reached number two on the US "Billboard" Hot 100 behind "Baby Boy" by Beyoncé
featuring Sean Paul and number 20 on the US Hot Digital Songs. It was number five
on the top Hot R&B/Hip-Hop songs of 2003. It is also known as a breakthrough single
for the crunk genre, as the song's success helped it become mainstream.
- TV Jones guitarist Brian Setzer, whose guitar sound relied heavily on vintage
Gretsch guitars. When the Gretsch Guitar Company was in the process of creating
a Brian Setzer signature model, Brian conducted a “blind sound test” of various
pickup models that were to be considered for use in these guitars. Tom's Hotrod
pickup design was chosen because of its sound being the most faithful to the original.
(At this point, the pickups Gretsch was using in their guitars were made of overseas
parts and ceramic magnets). Word soon spread that TV Jones was making “true-to-the-original”
Filter’tron pickups and many famous players demanded
- Gretsch South Carolina, where it remains today. The first new guitar model introduced
was the Traveling Wilburys model - an Asian import - which looked much like a
Danelectro. While this guitar model did little to bolster Gretsch's reputation
for producing classic guitars, it served notice that Gretsch was back. After numerous
failed attempts to acquire facilities or contract production in the United States,
Fred Gretsch and long-time Gretsch employee Duke Kramer, who advised Gretsch,
turned to Terada of Japan, and production began there. A range of reissues appeared
throughout the 1990s to mixed reviews. They were of generally high quality,
- source_sentence: 'Examining playfulness in adults: Testing its correlates with personality,
positive psychological functioning, goal aspirations, and multi-methodically assessed
ingenuity'
sentences:
- Implementation of Evolutionary Algorithms for Deep Architectures
- Chadwick Boseman Chadwick Boseman Chadwick Aaron Boseman (born November 29, 1976)
is an American actor, director, and producer known for his portrayals of real-life
historical figures such as Jackie Robinson in "42" (2013), James Brown in "Get
on Up" (2014) and Thurgood Marshall in "Marshall" (2017) and for his portrayal
of the superhero Black Panther in the Marvel Cinematic Universe films "" (2016),
"Black Panther" (2018), "" (2018) and the upcoming "" (2019). Boseman has also
had roles in the television series "Lincoln Heights" (2008) and "Persons Unknown"
(2010) and the films "The Express" (2008), "Draft Day" (2014) and "Message from
the
- 'Assessment of Play and Leisure: Delineation of the Problem'
- source_sentence: 1 in what part of italy was gelato first made
sentences:
- Domínguez Domínguez Domínguez is a name of Spanish origin. It used to mean "son
of Domingo" (i.e., son of Dominic). The surname is usually written Dominguez in
the Philippines and United States. Written as Domínguez in Spanish speaking countries
like Spain, Mexico, Argentina, etc... As of 2014, 40.7% of all known bearers of
the surname "Domínguez" were residents of Mexico (frequency 1:242), 12.8% of Spain
(1:288), 8.5% of Argentina (1:396), 7.7% of the United States (1:3,721), 4.3%
of Cuba (1:212), 3.2% of Colombia (1:1,186), 3.0% of Peru (1:831), 2.6% of Venezuela
(1:904), 2.6% of Honduras (1:265), 2.4% of Paraguay (1:241), 2.0%
- Frost Gelato to the taste of the ice cream they had in Italy concluding that the
only way to get gelato at the time was to make another trip to Italy. Thus both
owners searched for a way to make gelato in the United States eventually locating
a company that imports ingredients directly from Italy, after spending days studying
how to make gelato, the owners created their first batch and after sampling it
felt the tastes they had come across in Italy. Both owners wanted to share the
taste of gelato with their community and thus after a few months, Frost Gelato
- Gelato any way that ice cream is, including cup, cone, sandwich, cake, pie, or
on a stick. Gelato was invented by Buontalenti, in Florence (Tuscany), during
the Renaissance period. The Buontalenti created the dessert for the Grand Duke
Cosimo I de’ Medici, who wanted him to organize an opulent banquet to celebrate
the Spanish deputation. It was October 5, 1600, and Buontalenti had worked for
four months to prepare such a banquet. In Florence, most shops selling hand-made
ice-cream also usually offer a "Buontalenti" flavour. In 1686, the Sicilian fisherman
Francesco Procopio dei Coltelli perfected the first ice cream machine. However,
- source_sentence: who does george nelson represent in o brother where art thou
sentences:
- O Brother, Where Art Thou? the film got together and performed the music from
the film in a Down from the Mountain concert tour which was filmed for TV and
DVD. This included Ralph Stanley, John Hartford, Alison Krauss, Emmylou Harris,
Gillian Welch, Chris Sharp, and others. O Brother, Where Art Thou? O Brother,
Where Art Thou? is a 2000 crime comedy film written, produced, and directed by
Joel and Ethan Coen, and starring George Clooney, John Turturro, and Tim Blake
Nelson, with John Goodman, Holly Hunter, and Charles Durning in supporting roles.
The film is set in 1937 rural Mississippi during the Great Depression.
- O Brother, Where Art Thou? omitted all instances of the words "damn" and "hell"
from the Coens' script, which only became known to Clooney after the directors
pointed this out to him during shooting. This was the fourth film of the brothers
in which John Turturro has starred. Other actors in "O Brother, Where Art Thou?"
who had worked previously with the Coens include John Goodman (three films), Holly
Hunter (two), Michael Badalucco and Charles Durning (one film each). The Coens
used digital color correction to give the film a sepia-tinted look. Joel stated
this was because the actual set was "greener than Ireland". Cinematographer
- 'Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching
Movies and Reading Books'
model-index:
- name: all-MiniLM-L6-v2 trained on MEDI-MTEB triplets
results:
- task:
type: triplet
name: Triplet
dataset:
name: medi mteb dev
type: medi-mteb-dev
metrics:
- type: cosine_accuracy
value: 0.9116536208878427
name: Cosine Accuracy
- type: dot_accuracy
value: 0.08101154961957414
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.9119820460890032
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.9114894082872625
name: Euclidean Accuracy
- type: max_accuracy
value: 0.9119820460890032
name: Max Accuracy
---
# all-MiniLM-L6-v2 trained on MEDI-MTEB triplets
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the NQ, pubmed, specter_train_triples, S2ORC_citations_abstracts, fever, gooaq_pairs, codesearchnet, wikihow, WikiAnswers, eli5_question_answer, amazon-qa, medmcqa, zeroshot, TriviaQA_pairs, PAQ_pairs, stackexchange_duplicate_questions_title-body_title-body, trex, flickr30k_captions, hotpotqa, task671_ambigqa_text_generation, task061_ropes_answer_generation, task285_imdb_answer_generation, task905_hate_speech_offensive_classification, task566_circa_classification, task184_snli_entailment_to_neutral_text_modification, task280_stereoset_classification_stereotype_type, task1599_smcalflow_classification, task1384_deal_or_no_dialog_classification, task591_sciq_answer_generation, task823_peixian-rtgender_sentiment_analysis, task023_cosmosqa_question_generation, task900_freebase_qa_category_classification, task924_event2mind_word_generation, task152_tomqa_find_location_easy_noise, task1368_healthfact_sentence_generation, task1661_super_glue_classification, task1187_politifact_classification, task1728_web_nlg_data_to_text, task112_asset_simple_sentence_identification, task1340_msr_text_compression_compression, task072_abductivenli_answer_generation, task1504_hatexplain_answer_generation, task684_online_privacy_policy_text_information_type_generation, task1290_xsum_summarization, task075_squad1.1_answer_generation, task1587_scifact_classification, task384_socialiqa_question_classification, task1555_scitail_answer_generation, task1532_daily_dialog_emotion_classification, task239_tweetqa_answer_generation, task596_mocha_question_generation, task1411_dart_subject_identification, task1359_numer_sense_answer_generation, task329_gap_classification, task220_rocstories_title_classification, task316_crows-pairs_classification_stereotype, task495_semeval_headline_classification, task1168_brown_coarse_pos_tagging, task348_squad2.0_unanswerable_question_generation, task049_multirc_questions_needed_to_answer, task1534_daily_dialog_question_classification, task322_jigsaw_classification_threat, task295_semeval_2020_task4_commonsense_reasoning, task186_snli_contradiction_to_entailment_text_modification, task034_winogrande_question_modification_object, task160_replace_letter_in_a_sentence, task469_mrqa_answer_generation, task105_story_cloze-rocstories_sentence_generation, task649_race_blank_question_generation, task1536_daily_dialog_happiness_classification, task683_online_privacy_policy_text_purpose_answer_generation, task024_cosmosqa_answer_generation, task584_udeps_eng_fine_pos_tagging, task066_timetravel_binary_consistency_classification, task413_mickey_en_sentence_perturbation_generation, task182_duorc_question_generation, task028_drop_answer_generation, task1601_webquestions_answer_generation, task1295_adversarial_qa_question_answering, task201_mnli_neutral_classification, task038_qasc_combined_fact, task293_storycommonsense_emotion_text_generation, task572_recipe_nlg_text_generation, task517_emo_classify_emotion_of_dialogue, task382_hybridqa_answer_generation, task176_break_decompose_questions, task1291_multi_news_summarization, task155_count_nouns_verbs, task031_winogrande_question_generation_object, task279_stereoset_classification_stereotype, task1336_peixian_equity_evaluation_corpus_gender_classifier, task508_scruples_dilemmas_more_ethical_isidentifiable, task518_emo_different_dialogue_emotions, 
task077_splash_explanation_to_sql, task923_event2mind_classifier, task470_mrqa_question_generation, task638_multi_woz_classification, task1412_web_questions_question_answering, task847_pubmedqa_question_generation, task678_ollie_actual_relationship_answer_generation, task290_tellmewhy_question_answerability, task575_air_dialogue_classification, task189_snli_neutral_to_contradiction_text_modification, task026_drop_question_generation, task162_count_words_starting_with_letter, task079_conala_concat_strings, task610_conllpp_ner, task046_miscellaneous_question_typing, task197_mnli_domain_answer_generation, task1325_qa_zre_question_generation_on_subject_relation, task430_senteval_subject_count, task672_nummersense, task402_grailqa_paraphrase_generation, task904_hate_speech_offensive_classification, task192_hotpotqa_sentence_generation, task069_abductivenli_classification, task574_air_dialogue_sentence_generation, task187_snli_entailment_to_contradiction_text_modification, task749_glucose_reverse_cause_emotion_detection, task1552_scitail_question_generation, task750_aqua_multiple_choice_answering, task327_jigsaw_classification_toxic, task1502_hatexplain_classification, task328_jigsaw_classification_insult, task304_numeric_fused_head_resolution, task1293_kilt_tasks_hotpotqa_question_answering, task216_rocstories_correct_answer_generation, task1326_qa_zre_question_generation_from_answer, task1338_peixian_equity_evaluation_corpus_sentiment_classifier, task1729_personachat_generate_next, task1202_atomic_classification_xneed, task400_paws_paraphrase_classification, task502_scruples_anecdotes_whoiswrong_verification, task088_identify_typo_verification, task221_rocstories_two_choice_classification, task200_mnli_entailment_classification, task074_squad1.1_question_generation, task581_socialiqa_question_generation, task1186_nne_hrngo_classification, task898_freebase_qa_answer_generation, task1408_dart_similarity_classification, task168_strategyqa_question_decomposition, task1357_xlsum_summary_generation, task390_torque_text_span_selection, task165_mcscript_question_answering_commonsense, task1533_daily_dialog_formal_classification, task002_quoref_answer_generation, task1297_qasc_question_answering, task305_jeopardy_answer_generation_normal, task029_winogrande_full_object, task1327_qa_zre_answer_generation_from_question, task326_jigsaw_classification_obscene, task1542_every_ith_element_from_starting, task570_recipe_nlg_ner_generation, task1409_dart_text_generation, task401_numeric_fused_head_reference, task846_pubmedqa_classification, task1712_poki_classification, task344_hybridqa_answer_generation, task875_emotion_classification, task1214_atomic_classification_xwant, task106_scruples_ethical_judgment, task238_iirc_answer_from_passage_answer_generation, task1391_winogrande_easy_answer_generation, task195_sentiment140_classification, task163_count_words_ending_with_letter, task579_socialiqa_classification, task569_recipe_nlg_text_generation, task1602_webquestion_question_genreation, task747_glucose_cause_emotion_detection, task219_rocstories_title_answer_generation, task178_quartz_question_answering, task103_facts2story_long_text_generation, task301_record_question_generation, task1369_healthfact_sentence_generation, task515_senteval_odd_word_out, task496_semeval_answer_generation, task1658_billsum_summarization, task1204_atomic_classification_hinderedby, task1392_superglue_multirc_answer_verification, task306_jeopardy_answer_generation_double, task1286_openbookqa_question_answering, 
task159_check_frequency_of_words_in_sentence_pair, task151_tomqa_find_location_easy_clean, task323_jigsaw_classification_sexually_explicit, task037_qasc_generate_related_fact, task027_drop_answer_type_generation, task1596_event2mind_text_generation_2, task141_odd-man-out_classification_category, task194_duorc_answer_generation, task679_hope_edi_english_text_classification, task246_dream_question_generation, task1195_disflqa_disfluent_to_fluent_conversion, task065_timetravel_consistent_sentence_classification, task351_winomt_classification_gender_identifiability_anti, task580_socialiqa_answer_generation, task583_udeps_eng_coarse_pos_tagging, task202_mnli_contradiction_classification, task222_rocstories_two_chioce_slotting_classification, task498_scruples_anecdotes_whoiswrong_classification, task067_abductivenli_answer_generation, task616_cola_classification, task286_olid_offense_judgment, task188_snli_neutral_to_entailment_text_modification, task223_quartz_explanation_generation, task820_protoqa_answer_generation, task196_sentiment140_answer_generation, task1678_mathqa_answer_selection, task349_squad2.0_answerable_unanswerable_question_classification, task154_tomqa_find_location_hard_noise, task333_hateeval_classification_hate_en, task235_iirc_question_from_subtext_answer_generation, task1554_scitail_classification, task210_logic2text_structured_text_generation, task035_winogrande_question_modification_person, task230_iirc_passage_classification, task1356_xlsum_title_generation, task1726_mathqa_correct_answer_generation, task302_record_classification, task380_boolq_yes_no_question, task212_logic2text_classification, task748_glucose_reverse_cause_event_detection, task834_mathdataset_classification, task350_winomt_classification_gender_identifiability_pro, task191_hotpotqa_question_generation, task236_iirc_question_from_passage_answer_generation, task217_rocstories_ordering_answer_generation, task568_circa_question_generation, task614_glucose_cause_event_detection, task361_spolin_yesand_prompt_response_classification, task421_persent_sentence_sentiment_classification, task203_mnli_sentence_generation, task420_persent_document_sentiment_classification, task153_tomqa_find_location_hard_clean, task346_hybridqa_classification, task1211_atomic_classification_hassubevent, task360_spolin_yesand_response_generation, task510_reddit_tifu_title_summarization, task511_reddit_tifu_long_text_summarization, task345_hybridqa_answer_generation, task270_csrg_counterfactual_context_generation, task307_jeopardy_answer_generation_final, task001_quoref_question_generation, task089_swap_words_verification, task1196_atomic_classification_oeffect, task080_piqa_answer_generation, task1598_nyc_long_text_generation, task240_tweetqa_question_generation, task615_moviesqa_answer_generation, task1347_glue_sts-b_similarity_classification, task114_is_the_given_word_longest, task292_storycommonsense_character_text_generation, task115_help_advice_classification, task431_senteval_object_count, task1360_numer_sense_multiple_choice_qa_generation, task177_para-nmt_paraphrasing, task132_dais_text_modification, task269_csrg_counterfactual_story_generation, task233_iirc_link_exists_classification, task161_count_words_containing_letter, task1205_atomic_classification_isafter, task571_recipe_nlg_ner_generation, task1292_yelp_review_full_text_categorization, task428_senteval_inversion, task311_race_question_generation, task429_senteval_tense, task403_creak_commonsense_inference, task929_products_reviews_classification, 
task582_naturalquestion_answer_generation, task237_iirc_answer_from_subtext_answer_generation, task050_multirc_answerability, task184_break_generate_question, task669_ambigqa_answer_generation, task169_strategyqa_sentence_generation, task500_scruples_anecdotes_title_generation, task241_tweetqa_classification, task1345_glue_qqp_question_paraprashing, task218_rocstories_swap_order_answer_generation, task613_politifact_text_generation, task1167_penn_treebank_coarse_pos_tagging, task1422_mathqa_physics, task247_dream_answer_generation, task199_mnli_classification, task164_mcscript_question_answering_text, task1541_agnews_classification, task516_senteval_conjoints_inversion, task294_storycommonsense_motiv_text_generation, task501_scruples_anecdotes_post_type_verification, task213_rocstories_correct_ending_classification, task821_protoqa_question_generation, task493_review_polarity_classification, task308_jeopardy_answer_generation_all, task1595_event2mind_text_generation_1, task040_qasc_question_generation, task231_iirc_link_classification, task1727_wiqa_what_is_the_effect, task578_curiosity_dialogs_answer_generation, task310_race_classification, task309_race_answer_generation, task379_agnews_topic_classification, task030_winogrande_full_person, task1540_parsed_pdfs_summarization, task039_qasc_find_overlapping_words, task1206_atomic_classification_isbefore, task157_count_vowels_and_consonants, task339_record_answer_generation, task453_swag_answer_generation, task848_pubmedqa_classification, task673_google_wellformed_query_classification, task676_ollie_relationship_answer_generation, task268_casehold_legal_answer_generation, task844_financial_phrasebank_classification, task330_gap_answer_generation, task595_mocha_answer_generation, task1285_kpa_keypoint_matching, task234_iirc_passage_line_answer_generation, task494_review_polarity_answer_generation, task670_ambigqa_question_generation, task289_gigaword_summarization, npr, nli, SimpleWiki, amazon_review_2018, ccnews_title_text, agnews, xsum, msmarco, yahoo_answers_title_answer, squad_pairs, wow, mteb-amazon_counterfactual-avs_triplets, mteb-amazon_massive_intent-avs_triplets, mteb-amazon_massive_scenario-avs_triplets, mteb-amazon_reviews_multi-avs_triplets, mteb-banking77-avs_triplets, mteb-emotion-avs_triplets, mteb-imdb-avs_triplets, mteb-mtop_domain-avs_triplets, mteb-mtop_intent-avs_triplets, mteb-toxic_conversations_50k-avs_triplets, mteb-tweet_sentiment_extraction-avs_triplets and covid-bing-query-gpt4-avs_triplets datasets. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- NQ
- pubmed
- specter_train_triples
- S2ORC_citations_abstracts
- fever
- gooaq_pairs
- codesearchnet
- wikihow
- WikiAnswers
- eli5_question_answer
- amazon-qa
- medmcqa
- zeroshot
- TriviaQA_pairs
- PAQ_pairs
- stackexchange_duplicate_questions_title-body_title-body
- trex
- flickr30k_captions
- hotpotqa
- task671_ambigqa_text_generation
- task061_ropes_answer_generation
- task285_imdb_answer_generation
- task905_hate_speech_offensive_classification
- task566_circa_classification
- task184_snli_entailment_to_neutral_text_modification
- task280_stereoset_classification_stereotype_type
- task1599_smcalflow_classification
- task1384_deal_or_no_dialog_classification
- task591_sciq_answer_generation
- task823_peixian-rtgender_sentiment_analysis
- task023_cosmosqa_question_generation
- task900_freebase_qa_category_classification
- task924_event2mind_word_generation
- task152_tomqa_find_location_easy_noise
- task1368_healthfact_sentence_generation
- task1661_super_glue_classification
- task1187_politifact_classification
- task1728_web_nlg_data_to_text
- task112_asset_simple_sentence_identification
- task1340_msr_text_compression_compression
- task072_abductivenli_answer_generation
- task1504_hatexplain_answer_generation
- task684_online_privacy_policy_text_information_type_generation
- task1290_xsum_summarization
- task075_squad1.1_answer_generation
- task1587_scifact_classification
- task384_socialiqa_question_classification
- task1555_scitail_answer_generation
- task1532_daily_dialog_emotion_classification
- task239_tweetqa_answer_generation
- task596_mocha_question_generation
- task1411_dart_subject_identification
- task1359_numer_sense_answer_generation
- task329_gap_classification
- task220_rocstories_title_classification
- task316_crows-pairs_classification_stereotype
- task495_semeval_headline_classification
- task1168_brown_coarse_pos_tagging
- task348_squad2.0_unanswerable_question_generation
- task049_multirc_questions_needed_to_answer
- task1534_daily_dialog_question_classification
- task322_jigsaw_classification_threat
- task295_semeval_2020_task4_commonsense_reasoning
- task186_snli_contradiction_to_entailment_text_modification
- task034_winogrande_question_modification_object
- task160_replace_letter_in_a_sentence
- task469_mrqa_answer_generation
- task105_story_cloze-rocstories_sentence_generation
- task649_race_blank_question_generation
- task1536_daily_dialog_happiness_classification
- task683_online_privacy_policy_text_purpose_answer_generation
- task024_cosmosqa_answer_generation
- task584_udeps_eng_fine_pos_tagging
- task066_timetravel_binary_consistency_classification
- task413_mickey_en_sentence_perturbation_generation
- task182_duorc_question_generation
- task028_drop_answer_generation
- task1601_webquestions_answer_generation
- task1295_adversarial_qa_question_answering
- task201_mnli_neutral_classification
- task038_qasc_combined_fact
- task293_storycommonsense_emotion_text_generation
- task572_recipe_nlg_text_generation
- task517_emo_classify_emotion_of_dialogue
- task382_hybridqa_answer_generation
- task176_break_decompose_questions
- task1291_multi_news_summarization
- task155_count_nouns_verbs
- task031_winogrande_question_generation_object
- task279_stereoset_classification_stereotype
- task1336_peixian_equity_evaluation_corpus_gender_classifier
- task508_scruples_dilemmas_more_ethical_isidentifiable
- task518_emo_different_dialogue_emotions
- task077_splash_explanation_to_sql
- task923_event2mind_classifier
- task470_mrqa_question_generation
- task638_multi_woz_classification
- task1412_web_questions_question_answering
- task847_pubmedqa_question_generation
- task678_ollie_actual_relationship_answer_generation
- task290_tellmewhy_question_answerability
- task575_air_dialogue_classification
- task189_snli_neutral_to_contradiction_text_modification
- task026_drop_question_generation
- task162_count_words_starting_with_letter
- task079_conala_concat_strings
- task610_conllpp_ner
- task046_miscellaneous_question_typing
- task197_mnli_domain_answer_generation
- task1325_qa_zre_question_generation_on_subject_relation
- task430_senteval_subject_count
- task672_nummersense
- task402_grailqa_paraphrase_generation
- task904_hate_speech_offensive_classification
- task192_hotpotqa_sentence_generation
- task069_abductivenli_classification
- task574_air_dialogue_sentence_generation
- task187_snli_entailment_to_contradiction_text_modification
- task749_glucose_reverse_cause_emotion_detection
- task1552_scitail_question_generation
- task750_aqua_multiple_choice_answering
- task327_jigsaw_classification_toxic
- task1502_hatexplain_classification
- task328_jigsaw_classification_insult
- task304_numeric_fused_head_resolution
- task1293_kilt_tasks_hotpotqa_question_answering
- task216_rocstories_correct_answer_generation
- task1326_qa_zre_question_generation_from_answer
- task1338_peixian_equity_evaluation_corpus_sentiment_classifier
- task1729_personachat_generate_next
- task1202_atomic_classification_xneed
- task400_paws_paraphrase_classification
- task502_scruples_anecdotes_whoiswrong_verification
- task088_identify_typo_verification
- task221_rocstories_two_choice_classification
- task200_mnli_entailment_classification
- task074_squad1.1_question_generation
- task581_socialiqa_question_generation
- task1186_nne_hrngo_classification
- task898_freebase_qa_answer_generation
- task1408_dart_similarity_classification
- task168_strategyqa_question_decomposition
- task1357_xlsum_summary_generation
- task390_torque_text_span_selection
- task165_mcscript_question_answering_commonsense
- task1533_daily_dialog_formal_classification
- task002_quoref_answer_generation
- task1297_qasc_question_answering
- task305_jeopardy_answer_generation_normal
- task029_winogrande_full_object
- task1327_qa_zre_answer_generation_from_question
- task326_jigsaw_classification_obscene
- task1542_every_ith_element_from_starting
- task570_recipe_nlg_ner_generation
- task1409_dart_text_generation
- task401_numeric_fused_head_reference
- task846_pubmedqa_classification
- task1712_poki_classification
- task344_hybridqa_answer_generation
- task875_emotion_classification
- task1214_atomic_classification_xwant
- task106_scruples_ethical_judgment
- task238_iirc_answer_from_passage_answer_generation
- task1391_winogrande_easy_answer_generation
- task195_sentiment140_classification
- task163_count_words_ending_with_letter
- task579_socialiqa_classification
- task569_recipe_nlg_text_generation
- task1602_webquestion_question_genreation
- task747_glucose_cause_emotion_detection
- task219_rocstories_title_answer_generation
- task178_quartz_question_answering
- task103_facts2story_long_text_generation
- task301_record_question_generation
- task1369_healthfact_sentence_generation
- task515_senteval_odd_word_out
- task496_semeval_answer_generation
- task1658_billsum_summarization
- task1204_atomic_classification_hinderedby
- task1392_superglue_multirc_answer_verification
- task306_jeopardy_answer_generation_double
- task1286_openbookqa_question_answering
- task159_check_frequency_of_words_in_sentence_pair
- task151_tomqa_find_location_easy_clean
- task323_jigsaw_classification_sexually_explicit
- task037_qasc_generate_related_fact
- task027_drop_answer_type_generation
- task1596_event2mind_text_generation_2
- task141_odd-man-out_classification_category
- task194_duorc_answer_generation
- task679_hope_edi_english_text_classification
- task246_dream_question_generation
- task1195_disflqa_disfluent_to_fluent_conversion
- task065_timetravel_consistent_sentence_classification
- task351_winomt_classification_gender_identifiability_anti
- task580_socialiqa_answer_generation
- task583_udeps_eng_coarse_pos_tagging
- task202_mnli_contradiction_classification
- task222_rocstories_two_chioce_slotting_classification
- task498_scruples_anecdotes_whoiswrong_classification
- task067_abductivenli_answer_generation
- task616_cola_classification
- task286_olid_offense_judgment
- task188_snli_neutral_to_entailment_text_modification
- task223_quartz_explanation_generation
- task820_protoqa_answer_generation
- task196_sentiment140_answer_generation
- task1678_mathqa_answer_selection
- task349_squad2.0_answerable_unanswerable_question_classification
- task154_tomqa_find_location_hard_noise
- task333_hateeval_classification_hate_en
- task235_iirc_question_from_subtext_answer_generation
- task1554_scitail_classification
- task210_logic2text_structured_text_generation
- task035_winogrande_question_modification_person
- task230_iirc_passage_classification
- task1356_xlsum_title_generation
- task1726_mathqa_correct_answer_generation
- task302_record_classification
- task380_boolq_yes_no_question
- task212_logic2text_classification
- task748_glucose_reverse_cause_event_detection
- task834_mathdataset_classification
- task350_winomt_classification_gender_identifiability_pro
- task191_hotpotqa_question_generation
- task236_iirc_question_from_passage_answer_generation
- task217_rocstories_ordering_answer_generation
- task568_circa_question_generation
- task614_glucose_cause_event_detection
- task361_spolin_yesand_prompt_response_classification
- task421_persent_sentence_sentiment_classification
- task203_mnli_sentence_generation
- task420_persent_document_sentiment_classification
- task153_tomqa_find_location_hard_clean
- task346_hybridqa_classification
- task1211_atomic_classification_hassubevent
- task360_spolin_yesand_response_generation
- task510_reddit_tifu_title_summarization
- task511_reddit_tifu_long_text_summarization
- task345_hybridqa_answer_generation
- task270_csrg_counterfactual_context_generation
- task307_jeopardy_answer_generation_final
- task001_quoref_question_generation
- task089_swap_words_verification
- task1196_atomic_classification_oeffect
- task080_piqa_answer_generation
- task1598_nyc_long_text_generation
- task240_tweetqa_question_generation
- task615_moviesqa_answer_generation
- task1347_glue_sts-b_similarity_classification
- task114_is_the_given_word_longest
- task292_storycommonsense_character_text_generation
- task115_help_advice_classification
- task431_senteval_object_count
- task1360_numer_sense_multiple_choice_qa_generation
- task177_para-nmt_paraphrasing
- task132_dais_text_modification
- task269_csrg_counterfactual_story_generation
- task233_iirc_link_exists_classification
- task161_count_words_containing_letter
- task1205_atomic_classification_isafter
- task571_recipe_nlg_ner_generation
- task1292_yelp_review_full_text_categorization
- task428_senteval_inversion
- task311_race_question_generation
- task429_senteval_tense
- task403_creak_commonsense_inference
- task929_products_reviews_classification
- task582_naturalquestion_answer_generation
- task237_iirc_answer_from_subtext_answer_generation
- task050_multirc_answerability
- task184_break_generate_question
- task669_ambigqa_answer_generation
- task169_strategyqa_sentence_generation
- task500_scruples_anecdotes_title_generation
- task241_tweetqa_classification
- task1345_glue_qqp_question_paraprashing
- task218_rocstories_swap_order_answer_generation
- task613_politifact_text_generation
- task1167_penn_treebank_coarse_pos_tagging
- task1422_mathqa_physics
- task247_dream_answer_generation
- task199_mnli_classification
- task164_mcscript_question_answering_text
- task1541_agnews_classification
- task516_senteval_conjoints_inversion
- task294_storycommonsense_motiv_text_generation
- task501_scruples_anecdotes_post_type_verification
- task213_rocstories_correct_ending_classification
- task821_protoqa_question_generation
- task493_review_polarity_classification
- task308_jeopardy_answer_generation_all
- task1595_event2mind_text_generation_1
- task040_qasc_question_generation
- task231_iirc_link_classification
- task1727_wiqa_what_is_the_effect
- task578_curiosity_dialogs_answer_generation
- task310_race_classification
- task309_race_answer_generation
- task379_agnews_topic_classification
- task030_winogrande_full_person
- task1540_parsed_pdfs_summarization
- task039_qasc_find_overlapping_words
- task1206_atomic_classification_isbefore
- task157_count_vowels_and_consonants
- task339_record_answer_generation
- task453_swag_answer_generation
- task848_pubmedqa_classification
- task673_google_wellformed_query_classification
- task676_ollie_relationship_answer_generation
- task268_casehold_legal_answer_generation
- task844_financial_phrasebank_classification
- task330_gap_answer_generation
- task595_mocha_answer_generation
- task1285_kpa_keypoint_matching
- task234_iirc_passage_line_answer_generation
- task494_review_polarity_answer_generation
- task670_ambigqa_question_generation
- task289_gigaword_summarization
- npr
- nli
- SimpleWiki
- amazon_review_2018
- ccnews_title_text
- agnews
- xsum
- msmarco
- yahoo_answers_title_answer
- squad_pairs
- wow
- mteb-amazon_counterfactual-avs_triplets
- mteb-amazon_massive_intent-avs_triplets
- mteb-amazon_massive_scenario-avs_triplets
- mteb-amazon_reviews_multi-avs_triplets
- mteb-banking77-avs_triplets
- mteb-emotion-avs_triplets
- mteb-imdb-avs_triplets
- mteb-mtop_domain-avs_triplets
- mteb-mtop_intent-avs_triplets
- mteb-toxic_conversations_50k-avs_triplets
- mteb-tweet_sentiment_extraction-avs_triplets
- covid-bing-query-gpt4-avs_triplets
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-final")
# Run inference
sentences = [
'who does george nelson represent in o brother where art thou',
'O Brother, Where Art Thou? omitted all instances of the words "damn" and "hell" from the Coens\' script, which only became known to Clooney after the directors pointed this out to him during shooting. This was the fourth film of the brothers in which John Turturro has starred. Other actors in "O Brother, Where Art Thou?" who had worked previously with the Coens include John Goodman (three films), Holly Hunter (two), Michael Badalucco and Charles Durning (one film each). The Coens used digital color correction to give the film a sepia-tinted look. Joel stated this was because the actual set was "greener than Ireland". Cinematographer',
'O Brother, Where Art Thou? the film got together and performed the music from the film in a Down from the Mountain concert tour which was filmed for TV and DVD. This included Ralph Stanley, John Hartford, Alison Krauss, Emmylou Harris, Gillian Welch, Chris Sharp, and others. O Brother, Where Art Thou? O Brother, Where Art Thou? is a 2000 crime comedy film written, produced, and directed by Joel and Ethan Coen, and starring George Clooney, John Turturro, and Tim Blake Nelson, with John Goodman, Holly Hunter, and Charles Durning in supporting roles. The film is set in 1937 rural Mississippi during the Great Depression.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
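Since semantic search is one of the listed use cases, here is a hedged sketch using the library's `util.semantic_search` helper (the corpus and query are illustrative, echoing the widget examples above):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-final")

# Illustrative two-document corpus.
corpus = [
    "Gelato was invented by Buontalenti in Florence during the Renaissance.",
    "'Queen of My Heart' is a song by the Irish boy band Westlife.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(
    "in what part of italy was gelato first made", convert_to_tensor=True
)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=1)
print(corpus[hits[0][0]["corpus_id"]])  # best match for the query
```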
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `medi-mteb-dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:----------|
| cosine_accuracy | 0.9117 |
| dot_accuracy | 0.081 |
| manhattan_accuracy | 0.912 |
| euclidean_accuracy | 0.9115 |
| **max_accuracy** | **0.912** |
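For reference, a minimal sketch of running this evaluator yourself (the triplet shown is an illustrative stand-in for the actual dev set):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("avsolatorio/all-MiniLM-L6-v2-MEDI-MTEB-triplet-final")

# One (anchor, positive, negative) triplet standing in for the dev data.
evaluator = TripletEvaluator(
    anchors=["who sang the song queen of my heart"],
    positives=["'Queen of My Heart' is a song by the Irish boy band Westlife."],
    negatives=["'Heart of My Heart' is a popular song written by Ben Ryan in 1926."],
    name="medi-mteb-dev",
)
print(evaluator(model))  # accuracy per similarity/distance function
```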
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### NQ
* Dataset: NQ
* Size: 49,676 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.91 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 111 tokens</li><li>mean: 137.95 tokens</li><li>max: 212 tokens</li></ul> | <ul><li>min: 113 tokens</li><li>mean: 138.79 tokens</li><li>max: 209 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
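A hedged training sketch at these loss parameters (the triplet is an illustrative stand-in for the NQ data, and the legacy `fit` API is used for brevity rather than the exact Trainer setup behind this card):

```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Each InputExample carries an (anchor, positive, negative) triplet.
train_examples = [
    InputExample(texts=[
        "who sang the song queen of my heart",
        "'Queen of My Heart' is a song by the Irish boy band Westlife.",
        "'Heart of My Heart' was written by Ben Ryan in 1926.",
    ]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=1)
train_loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```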
#### pubmed
* Dataset: pubmed
* Size: 29,908 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 22.81 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 93 tokens</li><li>mean: 240.49 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 73 tokens</li><li>mean: 239.5 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### specter_train_triples
* Dataset: specter_train_triples
* Size: 49,676 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 15.69 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.12 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 16.39 tokens</li><li>max: 64 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### S2ORC_citations_abstracts
* Dataset: S2ORC_citations_abstracts
* Size: 99,352 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 196.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 203.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 208.09 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### fever
* Dataset: fever
* Size: 74,514 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 12.49 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 112.67 tokens</li><li>max: 154 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 113.92 tokens</li><li>max: 163 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### gooaq_pairs
* Dataset: gooaq_pairs
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.92 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 60.11 tokens</li><li>max: 150 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 63.73 tokens</li><li>max: 150 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### codesearchnet
* Dataset: codesearchnet
* Size: 15,210 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 28.96 tokens</li><li>max: 143 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 134.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 163.95 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### wikihow
* Dataset: wikihow
* Size: 5,070 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 8.05 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 45.27 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 35.68 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### WikiAnswers
* Dataset: WikiAnswers
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 12.79 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 12.93 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.13 tokens</li><li>max: 44 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### eli5_question_answer
* Dataset: eli5_question_answer
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 21.16 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 100.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 112.62 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### amazon-qa
* Dataset: amazon-qa
* Size: 99,352 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 23.56 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 52.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 62.09 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### medmcqa
* Dataset: medmcqa
* Size: 29,908 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 19.62 tokens</li><li>max: 167 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 110.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 111.99 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### zeroshot
* Dataset: zeroshot
* Size: 15,210 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 8.7 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 112.73 tokens</li><li>max: 178 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 115.71 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### TriviaQA_pairs
* Dataset: TriviaQA_pairs
* Size: 49,676 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.22 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 246.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 232.19 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### PAQ_pairs
* Dataset: PAQ_pairs
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 12.6 tokens</li><li>max: 22 tokens</li></ul> | <ul><li>min: 112 tokens</li><li>mean: 136.78 tokens</li><li>max: 205 tokens</li></ul> | <ul><li>min: 110 tokens</li><li>mean: 135.66 tokens</li><li>max: 254 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### stackexchange_duplicate_questions_title-body_title-body
* Dataset: stackexchange_duplicate_questions_title-body_title-body
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 150.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 142.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 198.29 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### trex
* Dataset: trex
* Size: 29,908 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 9.55 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 104.71 tokens</li><li>max: 212 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 118.22 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### flickr30k_captions
* Dataset: flickr30k_captions
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.95 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.68 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.15 tokens</li><li>max: 52 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### hotpotqa
* Dataset: hotpotqa
* Size: 40,048 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 23.83 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 113.6 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 115.33 tokens</li><li>max: 178 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
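
From <code>task671_ambigqa_text_generation</code> onward, the card lists dozens of small natural-instructions tasks of 1,018 triplets each, all sharing the same <code>anchor</code>/<code>positive</code>/<code>negative</code> columns and the same loss. Because the schema is uniform, a single trainer can consume them as a named dict. The sketch below assumes the sentence-transformers v3 <code>SentenceTransformerTrainer</code> API and substitutes a toy dataset for the real ones listed here.

```python
# A sketch, assuming the sentence-transformers v3 Trainer API, of feeding
# many named triplet datasets with one shared loss into a single run.
# The toy rows below stand in for the real datasets listed on this card.
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-m-v1.5")
loss = MultipleNegativesRankingLoss(model, scale=20.0)

toy = Dataset.from_dict({
    "anchor": ["who wrote hamlet?"],
    "positive": ["Hamlet is a tragedy written by William Shakespeare."],
    "negative": ["The Eiffel Tower is located in Paris."],
})

trainer = SentenceTransformerTrainer(
    model=model,
    args=SentenceTransformerTrainingArguments(output_dir="out"),
    train_dataset={"fever": toy, "hotpotqa": toy},  # one entry per dataset
    loss=loss,  # a single loss shared by every named dataset
)
trainer.train()
```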
#### task671_ambigqa_text_generation
* Dataset: task671_ambigqa_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 12.69 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.52 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.23 tokens</li><li>max: 19 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task061_ropes_answer_generation
* Dataset: task061_ropes_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 117 tokens</li><li>mean: 208.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 117 tokens</li><li>mean: 208.27 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 119 tokens</li><li>mean: 210.46 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task285_imdb_answer_generation
* Dataset: task285_imdb_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 46 tokens</li><li>mean: 208.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 203.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 208.78 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task905_hate_speech_offensive_classification
* Dataset: task905_hate_speech_offensive_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 41.73 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 40.48 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 32.23 tokens</li><li>max: 135 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task566_circa_classification
* Dataset: task566_circa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 27.77 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 27.22 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 27.46 tokens</li><li>max: 47 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task184_snli_entailment_to_neutral_text_modification
* Dataset: task184_snli_entailment_to_neutral_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 29.98 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 28.9 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 30.33 tokens</li><li>max: 100 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task280_stereoset_classification_stereotype_type
* Dataset: task280_stereoset_classification_stereotype_type
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 18.47 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 16.89 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 16.86 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1599_smcalflow_classification
* Dataset: task1599_smcalflow_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 11.25 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.47 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.12 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1384_deal_or_no_dialog_classification
* Dataset: task1384_deal_or_no_dialog_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 59.1 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 59.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 58.47 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task591_sciq_answer_generation
* Dataset: task591_sciq_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.61 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.17 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.67 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task823_peixian-rtgender_sentiment_analysis
* Dataset: task823_peixian-rtgender_sentiment_analysis
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 57.26 tokens</li><li>max: 179 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 60.03 tokens</li><li>max: 153 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 60.89 tokens</li><li>max: 169 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task023_cosmosqa_question_generation
* Dataset: task023_cosmosqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 79.52 tokens</li><li>max: 159 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 80.36 tokens</li><li>max: 165 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 79.14 tokens</li><li>max: 161 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task900_freebase_qa_category_classification
* Dataset: task900_freebase_qa_category_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 20.44 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.33 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 19.14 tokens</li><li>max: 69 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task924_event2mind_word_generation
* Dataset: task924_event2mind_word_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 32.06 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.13 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 31.58 tokens</li><li>max: 68 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task152_tomqa_find_location_easy_noise
* Dataset: task152_tomqa_find_location_easy_noise
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 37 tokens</li><li>mean: 52.96 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 52.53 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 52.92 tokens</li><li>max: 82 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1368_healthfact_sentence_generation
* Dataset: task1368_healthfact_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 91 tokens</li><li>mean: 240.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 239.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 97 tokens</li><li>mean: 245.05 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1661_super_glue_classification
* Dataset: task1661_super_glue_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 140.99 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 142.44 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 143.37 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1187_politifact_classification
* Dataset: task1187_politifact_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 33.28 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 31.59 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 31.9 tokens</li><li>max: 71 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1728_web_nlg_data_to_text
* Dataset: task1728_web_nlg_data_to_text
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 43.07 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 46.55 tokens</li><li>max: 152 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 43.18 tokens</li><li>max: 152 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task112_asset_simple_sentence_identification
* Dataset: task112_asset_simple_sentence_identification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 51.87 tokens</li><li>max: 136 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 51.68 tokens</li><li>max: 144 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 51.93 tokens</li><li>max: 114 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1340_msr_text_compression_compression
* Dataset: task1340_msr_text_compression_compression
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 41.77 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 44.27 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.08 tokens</li><li>max: 141 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task072_abductivenli_answer_generation
* Dataset: task072_abductivenli_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 26.8 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 26.15 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 26.4 tokens</li><li>max: 55 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1504_hatexplain_answer_generation
* Dataset: task1504_hatexplain_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 28.53 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 24.21 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.94 tokens</li><li>max: 67 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task684_online_privacy_policy_text_information_type_generation
* Dataset: task684_online_privacy_policy_text_information_type_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 29.91 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.18 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 30.06 tokens</li><li>max: 68 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1290_xsum_summarization
* Dataset: task1290_xsum_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 226.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 229.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 229.59 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task075_squad1.1_answer_generation
* Dataset: task075_squad1.1_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 48 tokens</li><li>mean: 167.12 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 173.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 178.89 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1587_scifact_classification
* Dataset: task1587_scifact_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 88 tokens</li><li>mean: 242.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 90 tokens</li><li>mean: 246.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 244.36 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task384_socialiqa_question_classification
* Dataset: task384_socialiqa_question_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 35.46 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.33 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.52 tokens</li><li>max: 57 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1555_scitail_answer_generation
* Dataset: task1555_scitail_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 36.88 tokens</li><li>max: 90 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 36.12 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 36.59 tokens</li><li>max: 92 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1532_daily_dialog_emotion_classification
* Dataset: task1532_daily_dialog_emotion_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 135.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 140.06 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 134.53 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task239_tweetqa_answer_generation
* Dataset: task239_tweetqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 56.05 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 56.59 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 56.05 tokens</li><li>max: 81 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task596_mocha_question_generation
* Dataset: task596_mocha_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 34 tokens</li><li>mean: 80.75 tokens</li><li>max: 163 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 96.06 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 45.02 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1411_dart_subject_identification
* Dataset: task1411_dart_subject_identification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.01 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.1 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.36 tokens</li><li>max: 38 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1359_numer_sense_answer_generation
* Dataset: task1359_numer_sense_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 18.75 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 18.43 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 18.3 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task329_gap_classification
* Dataset: task329_gap_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 40 tokens</li><li>mean: 123.98 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 127.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 128.35 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task220_rocstories_title_classification
* Dataset: task220_rocstories_title_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 53 tokens</li><li>mean: 80.81 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 81.14 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 79.79 tokens</li><li>max: 115 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task316_crows-pairs_classification_stereotype
* Dataset: task316_crows-pairs_classification_stereotype
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.78 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.35 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.82 tokens</li><li>max: 52 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task495_semeval_headline_classification
* Dataset: task495_semeval_headline_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 24.57 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.23 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 24.2 tokens</li><li>max: 38 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1168_brown_coarse_pos_tagging
* Dataset: task1168_brown_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 43.83 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 43.44 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 44.95 tokens</li><li>max: 197 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task348_squad2.0_unanswerable_question_generation
* Dataset: task348_squad2.0_unanswerable_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 30 tokens</li><li>mean: 153.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 161.19 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 167.06 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task049_multirc_questions_needed_to_answer
* Dataset: task049_multirc_questions_needed_to_answer
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 174 tokens</li><li>mean: 252.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 169 tokens</li><li>mean: 252.57 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 178 tokens</li><li>mean: 252.73 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1534_daily_dialog_question_classification
* Dataset: task1534_daily_dialog_question_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 125.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 130.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 135.56 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task322_jigsaw_classification_threat
* Dataset: task322_jigsaw_classification_threat
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 54.84 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 62.09 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 62.43 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task295_semeval_2020_task4_commonsense_reasoning
* Dataset: task295_semeval_2020_task4_commonsense_reasoning
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 44.81 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 45.07 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 44.7 tokens</li><li>max: 88 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task186_snli_contradiction_to_entailment_text_modification
* Dataset: task186_snli_contradiction_to_entailment_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 31.21 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.13 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 32.21 tokens</li><li>max: 67 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task034_winogrande_question_modification_object
* Dataset: task034_winogrande_question_modification_object
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 36.36 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 35.59 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 34.87 tokens</li><li>max: 55 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task160_replace_letter_in_a_sentence
* Dataset: task160_replace_letter_in_a_sentence
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 31.98 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.78 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 31.8 tokens</li><li>max: 48 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task469_mrqa_answer_generation
* Dataset: task469_mrqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 27 tokens</li><li>mean: 182.22 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 180.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 184.07 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task105_story_cloze-rocstories_sentence_generation
* Dataset: task105_story_cloze-rocstories_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 36 tokens</li><li>mean: 55.58 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 54.96 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 55.99 tokens</li><li>max: 76 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task649_race_blank_question_generation
* Dataset: task649_race_blank_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 36 tokens</li><li>mean: 253.19 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 252.56 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 157 tokens</li><li>mean: 254.12 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1536_daily_dialog_happiness_classification
* Dataset: task1536_daily_dialog_happiness_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 127.06 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 133.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 142.64 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task683_online_privacy_policy_text_purpose_answer_generation
* Dataset: task683_online_privacy_policy_text_purpose_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 29.93 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 30.22 tokens</li><li>max: 64 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 29.85 tokens</li><li>max: 68 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task024_cosmosqa_answer_generation
* Dataset: task024_cosmosqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 45 tokens</li><li>mean: 92.5 tokens</li><li>max: 176 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 93.22 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 94.89 tokens</li><li>max: 183 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task584_udeps_eng_fine_pos_tagging
* Dataset: task584_udeps_eng_fine_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 40.13 tokens</li><li>max: 120 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 39.18 tokens</li><li>max: 186 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.4 tokens</li><li>max: 148 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task066_timetravel_binary_consistency_classification
* Dataset: task066_timetravel_binary_consistency_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 42 tokens</li><li>mean: 66.89 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 67.42 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 67.0 tokens</li><li>max: 92 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task413_mickey_en_sentence_perturbation_generation
* Dataset: task413_mickey_en_sentence_perturbation_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 13.77 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.82 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 13.31 tokens</li><li>max: 20 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task182_duorc_question_generation
* Dataset: task182_duorc_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 99 tokens</li><li>mean: 241.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 120 tokens</li><li>mean: 245.95 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 99 tokens</li><li>mean: 246.6 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task028_drop_answer_generation
* Dataset: task028_drop_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 76 tokens</li><li>mean: 230.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 86 tokens</li><li>mean: 234.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 81 tokens</li><li>mean: 235.71 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1601_webquestions_answer_generation
* Dataset: task1601_webquestions_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 16.47 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.67 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 16.76 tokens</li><li>max: 27 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1295_adversarial_qa_question_answering
* Dataset: task1295_adversarial_qa_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 45 tokens</li><li>mean: 165.1 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 54 tokens</li><li>mean: 167.21 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 166.49 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task201_mnli_neutral_classification
* Dataset: task201_mnli_neutral_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 73.0 tokens</li><li>max: 218 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 73.42 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 72.48 tokens</li><li>max: 205 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task038_qasc_combined_fact
* Dataset: task038_qasc_combined_fact
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 31.3 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 30.49 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.87 tokens</li><li>max: 53 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task293_storycommonsense_emotion_text_generation
* Dataset: task293_storycommonsense_emotion_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 40.74 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 40.56 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 38.5 tokens</li><li>max: 86 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task572_recipe_nlg_text_generation
* Dataset: task572_recipe_nlg_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 114.82 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 121.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 124.38 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task517_emo_classify_emotion_of_dialogue
* Dataset: task517_emo_classify_emotion_of_dialogue
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 18.18 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.03 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 18.39 tokens</li><li>max: 67 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task382_hybridqa_answer_generation
* Dataset: task382_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 42.34 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 41.63 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 41.73 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task176_break_decompose_questions
* Dataset: task176_break_decompose_questions
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 17.39 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.19 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.71 tokens</li><li>max: 38 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1291_multi_news_summarization
* Dataset: task1291_multi_news_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 116 tokens</li><li>mean: 255.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 146 tokens</li><li>mean: 255.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 68 tokens</li><li>mean: 252.09 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task155_count_nouns_verbs
* Dataset: task155_count_nouns_verbs
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 23 tokens</li><li>mean: 27.03 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 26.8 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 26.94 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task031_winogrande_question_generation_object
* Dataset: task031_winogrande_question_generation_object
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 7.42 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.31 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.27 tokens</li><li>max: 11 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task279_stereoset_classification_stereotype
* Dataset: task279_stereoset_classification_stereotype
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 17.91 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.43 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.2 tokens</li><li>max: 50 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1336_peixian_equity_evaluation_corpus_gender_classifier
* Dataset: task1336_peixian_equity_evaluation_corpus_gender_classifier
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.62 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.6 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.69 tokens</li><li>max: 16 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task508_scruples_dilemmas_more_ethical_isidentifiable
* Dataset: task508_scruples_dilemmas_more_ethical_isidentifiable
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 29.63 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 28.69 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 28.59 tokens</li><li>max: 86 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task518_emo_different_dialogue_emotions
* Dataset: task518_emo_different_dialogue_emotions
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 47.83 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 45.51 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 45.81 tokens</li><li>max: 123 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task077_splash_explanation_to_sql
* Dataset: task077_splash_explanation_to_sql
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 39.82 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 39.88 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 35.83 tokens</li><li>max: 111 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task923_event2mind_classifier
* Dataset: task923_event2mind_classifier
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 20.61 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 18.62 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 19.51 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task470_mrqa_question_generation
* Dataset: task470_mrqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 172.18 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 175.43 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 180.36 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task638_multi_woz_classification
* Dataset: task638_multi_woz_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 78 tokens</li><li>mean: 223.56 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 220.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 220.0 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1412_web_questions_question_answering
* Dataset: task1412_web_questions_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.33 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.18 tokens</li><li>max: 17 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.08 tokens</li><li>max: 16 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task847_pubmedqa_question_generation
* Dataset: task847_pubmedqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 21 tokens</li><li>mean: 248.66 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 248.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 249.11 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task678_ollie_actual_relationship_answer_generation
* Dataset: task678_ollie_actual_relationship_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 41.01 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 37.95 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 41.14 tokens</li><li>max: 104 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task290_tellmewhy_question_answerability
* Dataset: task290_tellmewhy_question_answerability
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 37 tokens</li><li>mean: 63.19 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 62.66 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 63.44 tokens</li><li>max: 95 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task575_air_dialogue_classification
* Dataset: task575_air_dialogue_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 14.16 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 13.55 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.3 tokens</li><li>max: 42 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task189_snli_neutral_to_contradiction_text_modification
* Dataset: task189_snli_neutral_to_contradiction_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 31.82 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 30.75 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 33.25 tokens</li><li>max: 105 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task026_drop_question_generation
* Dataset: task026_drop_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 82 tokens</li><li>mean: 219.39 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 222.63 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 96 tokens</li><li>mean: 232.08 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task162_count_words_starting_with_letter
* Dataset: task162_count_words_starting_with_letter
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 32.21 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.77 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.64 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task079_conala_concat_strings
* Dataset: task079_conala_concat_strings
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 39.62 tokens</li><li>max: 76 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 34.2 tokens</li><li>max: 80 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 33.53 tokens</li><li>max: 76 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task610_conllpp_ner
* Dataset: task610_conllpp_ner
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 19.55 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.27 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 14.12 tokens</li><li>max: 54 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task046_miscellaneous_question_typing
* Dataset: task046_miscellaneous_question_typing
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 25.41 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 24.94 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 25.13 tokens</li><li>max: 57 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task197_mnli_domain_answer_generation
* Dataset: task197_mnli_domain_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 44.09 tokens</li><li>max: 197 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 44.97 tokens</li><li>max: 211 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 39.22 tokens</li><li>max: 115 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1325_qa_zre_question_generation_on_subject_relation
* Dataset: task1325_qa_zre_question_generation_on_subject_relation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 51.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 49.57 tokens</li><li>max: 180 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 54.59 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task430_senteval_subject_count
* Dataset: task430_senteval_subject_count
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 17.14 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.31 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.13 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task672_nummersense
* Dataset: task672_nummersense
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.72 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.33 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.21 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task402_grailqa_paraphrase_generation
* Dataset: task402_grailqa_paraphrase_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 23 tokens</li><li>mean: 127.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 139.34 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 133.69 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task904_hate_speech_offensive_classification
* Dataset: task904_hate_speech_offensive_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 35.03 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 34.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 27.84 tokens</li><li>max: 148 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task192_hotpotqa_sentence_generation
* Dataset: task192_hotpotqa_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 37 tokens</li><li>mean: 125.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 123.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 134.16 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task069_abductivenli_classification
* Dataset: task069_abductivenli_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 33 tokens</li><li>mean: 52.09 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 52.16 tokens</li><li>max: 95 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 51.84 tokens</li><li>max: 95 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task574_air_dialogue_sentence_generation
* Dataset: task574_air_dialogue_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 54 tokens</li><li>mean: 143.98 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 143.52 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 66 tokens</li><li>mean: 147.45 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task187_snli_entailment_to_contradiction_text_modification
* Dataset: task187_snli_entailment_to_contradiction_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 30.23 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 29.82 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 29.44 tokens</li><li>max: 71 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task749_glucose_reverse_cause_emotion_detection
* Dataset: task749_glucose_reverse_cause_emotion_detection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 38 tokens</li><li>mean: 67.61 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 67.14 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 68.46 tokens</li><li>max: 107 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1552_scitail_question_generation
* Dataset: task1552_scitail_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 18.37 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 17.55 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.88 tokens</li><li>max: 54 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task750_aqua_multiple_choice_answering
* Dataset: task750_aqua_multiple_choice_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 33 tokens</li><li>mean: 69.62 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 67.98 tokens</li><li>max: 194 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 67.81 tokens</li><li>max: 165 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task327_jigsaw_classification_toxic
* Dataset: task327_jigsaw_classification_toxic
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 36.8 tokens</li><li>max: 234 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 40.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 45.53 tokens</li><li>max: 244 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1502_hatexplain_classification
* Dataset: task1502_hatexplain_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 28.69 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.7 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.92 tokens</li><li>max: 90 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task328_jigsaw_classification_insult
* Dataset: task328_jigsaw_classification_insult
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 51.02 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 60.56 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 64.19 tokens</li><li>max: 249 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task304_numeric_fused_head_resolution
* Dataset: task304_numeric_fused_head_resolution
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 120.75 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 122.1 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 134.06 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1293_kilt_tasks_hotpotqa_question_answering
* Dataset: task1293_kilt_tasks_hotpotqa_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 24.78 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 24.2 tokens</li><li>max: 114 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 23.85 tokens</li><li>max: 84 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task216_rocstories_correct_answer_generation
* Dataset: task216_rocstories_correct_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 59.5 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 58.38 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 58.22 tokens</li><li>max: 95 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1326_qa_zre_question_generation_from_answer
* Dataset: task1326_qa_zre_question_generation_from_answer
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 46.37 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 45.05 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 49.47 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1338_peixian_equity_evaluation_corpus_sentiment_classifier
* Dataset: task1338_peixian_equity_evaluation_corpus_sentiment_classifier
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.68 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.71 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.57 tokens</li><li>max: 17 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1729_personachat_generate_next
* Dataset: task1729_personachat_generate_next
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 44 tokens</li><li>mean: 146.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 142.09 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 50 tokens</li><li>mean: 144.22 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1202_atomic_classification_xneed
* Dataset: task1202_atomic_classification_xneed
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 19.55 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.39 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.22 tokens</li><li>max: 28 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task400_paws_paraphrase_classification
* Dataset: task400_paws_paraphrase_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 52.28 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 51.88 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 53.03 tokens</li><li>max: 97 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task502_scruples_anecdotes_whoiswrong_verification
* Dataset: task502_scruples_anecdotes_whoiswrong_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 229.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 236.43 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 235.02 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task088_identify_typo_verification
* Dataset: task088_identify_typo_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 15.08 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.05 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 15.39 tokens</li><li>max: 47 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task221_rocstories_two_choice_classification
* Dataset: task221_rocstories_two_choice_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 47 tokens</li><li>mean: 72.64 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.66 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 73.26 tokens</li><li>max: 108 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task200_mnli_entailment_classification
* Dataset: task200_mnli_entailment_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 72.63 tokens</li><li>max: 198 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 72.69 tokens</li><li>max: 224 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 73.44 tokens</li><li>max: 226 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task074_squad1.1_question_generation
* Dataset: task074_squad1.1_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 30 tokens</li><li>mean: 150.23 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 160.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 164.59 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task581_socialiqa_question_generation
* Dataset: task581_socialiqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 26.52 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 25.55 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 25.85 tokens</li><li>max: 48 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1186_nne_hrngo_classification
* Dataset: task1186_nne_hrngo_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 33.82 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 33.49 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 33.34 tokens</li><li>max: 77 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task898_freebase_qa_answer_generation
* Dataset: task898_freebase_qa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.18 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.45 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 17.48 tokens</li><li>max: 79 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1408_dart_similarity_classification
* Dataset: task1408_dart_similarity_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 59.48 tokens</li><li>max: 147 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 61.95 tokens</li><li>max: 154 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 48.32 tokens</li><li>max: 124 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task168_strategyqa_question_decomposition
* Dataset: task168_strategyqa_question_decomposition
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 42 tokens</li><li>mean: 81.83 tokens</li><li>max: 181 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 79.75 tokens</li><li>max: 179 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 77.43 tokens</li><li>max: 166 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1357_xlsum_summary_generation
* Dataset: task1357_xlsum_summary_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 67 tokens</li><li>mean: 242.04 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 243.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 67 tokens</li><li>mean: 247.07 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task390_torque_text_span_selection
* Dataset: task390_torque_text_span_selection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 47 tokens</li><li>mean: 110.04 tokens</li><li>max: 196 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 110.49 tokens</li><li>max: 195 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 110.67 tokens</li><li>max: 196 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task165_mcscript_question_answering_commonsense
* Dataset: task165_mcscript_question_answering_commonsense
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 147 tokens</li><li>mean: 198.24 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 145 tokens</li><li>mean: 196.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 147 tokens</li><li>mean: 198.41 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1533_daily_dialog_formal_classification
* Dataset: task1533_daily_dialog_formal_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 129.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 136.75 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 137.33 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task002_quoref_answer_generation
* Dataset: task002_quoref_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 214 tokens</li><li>mean: 255.54 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 214 tokens</li><li>mean: 255.53 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 224 tokens</li><li>mean: 255.61 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1297_qasc_question_answering
* Dataset: task1297_qasc_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 61 tokens</li><li>mean: 84.69 tokens</li><li>max: 134 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 85.39 tokens</li><li>max: 130 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 84.83 tokens</li><li>max: 125 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task305_jeopardy_answer_generation_normal
* Dataset: task305_jeopardy_answer_generation_normal
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 27.72 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 27.43 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 27.37 tokens</li><li>max: 46 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task029_winogrande_full_object
* Dataset: task029_winogrande_full_object
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 7.37 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.32 tokens</li><li>max: 11 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.24 tokens</li><li>max: 10 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1327_qa_zre_answer_generation_from_question
* Dataset: task1327_qa_zre_answer_generation_from_question
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 55.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 52.2 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 55.59 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task326_jigsaw_classification_obscene
* Dataset: task326_jigsaw_classification_obscene
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 65.45 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 77.38 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 74.07 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1542_every_ith_element_from_starting
* Dataset: task1542_every_ith_element_from_starting
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 125.21 tokens</li><li>max: 245 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 123.54 tokens</li><li>max: 244 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 120.48 tokens</li><li>max: 238 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task570_recipe_nlg_ner_generation
* Dataset: task570_recipe_nlg_ner_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 74.07 tokens</li><li>max: 250 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 73.6 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 76.08 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1409_dart_text_generation
* Dataset: task1409_dart_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 67.5 tokens</li><li>max: 174 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 72.52 tokens</li><li>max: 170 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 67.55 tokens</li><li>max: 164 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task401_numeric_fused_head_reference
* Dataset: task401_numeric_fused_head_reference
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 109.08 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 116.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 119.65 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task846_pubmedqa_classification
* Dataset: task846_pubmedqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 32 tokens</li><li>mean: 85.83 tokens</li><li>max: 246 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 85.03 tokens</li><li>max: 225 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 93.96 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1712_poki_classification
* Dataset: task1712_poki_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 52.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 55.65 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 63.01 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task344_hybridqa_answer_generation
* Dataset: task344_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 22.15 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 22.07 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 22.07 tokens</li><li>max: 55 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task875_emotion_classification
* Dataset: task875_emotion_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 23.03 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.42 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 20.36 tokens</li><li>max: 68 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1214_atomic_classification_xwant
* Dataset: task1214_atomic_classification_xwant
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 19.66 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.39 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 19.57 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task106_scruples_ethical_judgment
* Dataset: task106_scruples_ethical_judgment
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 29.85 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 28.96 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 28.77 tokens</li><li>max: 58 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task238_iirc_answer_from_passage_answer_generation
* Dataset: task238_iirc_answer_from_passage_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 138 tokens</li><li>mean: 242.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 165 tokens</li><li>mean: 242.86 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 173 tokens</li><li>mean: 243.06 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1391_winogrande_easy_answer_generation
* Dataset: task1391_winogrande_easy_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 26 tokens</li><li>mean: 31.69 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 31.28 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 31.16 tokens</li><li>max: 49 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task195_sentiment140_classification
* Dataset: task195_sentiment140_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 22.62 tokens</li><li>max: 118 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 18.82 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 21.32 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task163_count_words_ending_with_letter
* Dataset: task163_count_words_ending_with_letter
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 32.06 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.69 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 31.58 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task579_socialiqa_classification
* Dataset: task579_socialiqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 54.2 tokens</li><li>max: 132 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 53.61 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 40 tokens</li><li>mean: 54.16 tokens</li><li>max: 84 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task569_recipe_nlg_text_generation
* Dataset: task569_recipe_nlg_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 193.73 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 55 tokens</li><li>mean: 193.64 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 198.12 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1602_webquestion_question_genreation
* Dataset: task1602_webquestion_question_genreation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 23.64 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 24.12 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 22.49 tokens</li><li>max: 120 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task747_glucose_cause_emotion_detection
* Dataset: task747_glucose_cause_emotion_detection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 68.15 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 68.3 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 68.79 tokens</li><li>max: 99 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task219_rocstories_title_answer_generation
* Dataset: task219_rocstories_title_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 42 tokens</li><li>mean: 67.71 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 66.7 tokens</li><li>max: 97 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 66.92 tokens</li><li>max: 96 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task178_quartz_question_answering
* Dataset: task178_quartz_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 57.78 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 57.44 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 56.86 tokens</li><li>max: 102 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task103_facts2story_long_text_generation
* Dataset: task103_facts2story_long_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 52 tokens</li><li>mean: 80.49 tokens</li><li>max: 143 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 82.22 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 78.96 tokens</li><li>max: 145 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task301_record_question_generation
* Dataset: task301_record_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 140 tokens</li><li>mean: 210.71 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 139 tokens</li><li>mean: 209.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 143 tokens</li><li>mean: 208.74 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1369_healthfact_sentence_generation
* Dataset: task1369_healthfact_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 110 tokens</li><li>mean: 243.25 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 101 tokens</li><li>mean: 243.17 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 113 tokens</li><li>mean: 251.67 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task515_senteval_odd_word_out
* Dataset: task515_senteval_odd_word_out
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 19.72 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.13 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 19.0 tokens</li><li>max: 35 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task496_semeval_answer_generation
* Dataset: task496_semeval_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 28.11 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 27.8 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 27.68 tokens</li><li>max: 45 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1658_billsum_summarization
* Dataset: task1658_billsum_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
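
Note that task1658_billsum_summarization above reports min = mean = max = 256 tokens for every column, which is consistent with token counts being measured after truncation at the model's 256-token sequence limit. A hedged sketch of how per-column statistics like these could be reproduced (the tokenizer name is an assumption):

```python
# Hedged sketch: min/mean/max token counts over the first 1000 samples,
# in the style of the "Approximate statistics" tables in this card.
from statistics import mean
from transformers import AutoTokenizer

# Assumed tokenizer; any checkpoint sharing the model's vocabulary would do.
tokenizer = AutoTokenizer.from_pretrained("Snowflake/snowflake-arctic-embed-m-v1.5")

def column_stats(texts, max_length=256):
    # Counts are capped at the 256-token maximum sequence length, which is
    # why heavily truncated columns report a hard max (or even mean) of 256.
    counts = [min(len(tokenizer(t)["input_ids"]), max_length) for t in texts[:1000]]
    return min(counts), round(mean(counts), 2), max(counts)

# Usage: column_stats(train_dataset["anchor"]) -> e.g. (256, 256.0, 256)
```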
#### task1204_atomic_classification_hinderedby
* Dataset: task1204_atomic_classification_hinderedby
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 22.1 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 22.07 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.5 tokens</li><li>max: 38 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1392_superglue_multirc_answer_verification
* Dataset: task1392_superglue_multirc_answer_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 128 tokens</li><li>mean: 241.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 127 tokens</li><li>mean: 241.97 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 136 tokens</li><li>mean: 242.04 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task306_jeopardy_answer_generation_double
* Dataset: task306_jeopardy_answer_generation_double
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 27.79 tokens</li><li>max: 47 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 27.16 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 27.61 tokens</li><li>max: 47 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1286_openbookqa_question_answering
* Dataset: task1286_openbookqa_question_answering
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 39.54 tokens</li><li>max: 85 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 38.94 tokens</li><li>max: 96 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 38.26 tokens</li><li>max: 89 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task159_check_frequency_of_words_in_sentence_pair
* Dataset: task159_check_frequency_of_words_in_sentence_pair
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 44 tokens</li><li>mean: 50.37 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 50.35 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 50.61 tokens</li><li>max: 66 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task151_tomqa_find_location_easy_clean
* Dataset: task151_tomqa_find_location_easy_clean
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 37 tokens</li><li>mean: 50.73 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 50.28 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 37 tokens</li><li>mean: 50.52 tokens</li><li>max: 74 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task323_jigsaw_classification_sexually_explicit
* Dataset: task323_jigsaw_classification_sexually_explicit
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 66.26 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 76.73 tokens</li><li>max: 248 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 75.5 tokens</li><li>max: 251 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task037_qasc_generate_related_fact
* Dataset: task037_qasc_generate_related_fact
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 22.04 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 22.03 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 21.9 tokens</li><li>max: 40 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task027_drop_answer_type_generation
* Dataset: task027_drop_answer_type_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 87 tokens</li><li>mean: 229.02 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 74 tokens</li><li>mean: 230.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 71 tokens</li><li>mean: 232.43 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1596_event2mind_text_generation_2
* Dataset: task1596_event2mind_text_generation_2
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.97 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.03 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.06 tokens</li><li>max: 18 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task141_odd-man-out_classification_category
* Dataset: task141_odd-man-out_classification_category
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 18.45 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 18.38 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 18.46 tokens</li><li>max: 25 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task194_duorc_answer_generation
* Dataset: task194_duorc_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 149 tokens</li><li>mean: 251.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 147 tokens</li><li>mean: 252.05 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 148 tokens</li><li>mean: 251.76 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task679_hope_edi_english_text_classification
* Dataset: task679_hope_edi_english_text_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 27.77 tokens</li><li>max: 199 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 27.23 tokens</li><li>max: 205 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 29.87 tokens</li><li>max: 194 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task246_dream_question_generation
* Dataset: task246_dream_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 80.33 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 80.74 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 87.22 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1195_disflqa_disfluent_to_fluent_conversion
* Dataset: task1195_disflqa_disfluent_to_fluent_conversion
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 19.76 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 19.88 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.2 tokens</li><li>max: 44 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task065_timetravel_consistent_sentence_classification
* Dataset: task065_timetravel_consistent_sentence_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 55 tokens</li><li>mean: 79.4 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 79.17 tokens</li><li>max: 110 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 80.1 tokens</li><li>max: 110 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task351_winomt_classification_gender_identifiability_anti
* Dataset: task351_winomt_classification_gender_identifiability_anti
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 21.76 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.66 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.78 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task580_socialiqa_answer_generation
* Dataset: task580_socialiqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 52.41 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 51.02 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 50.98 tokens</li><li>max: 87 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task583_udeps_eng_coarse_pos_tagging
* Dataset: task583_udeps_eng_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 41.24 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.21 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 40.93 tokens</li><li>max: 185 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task202_mnli_contradiction_classification
* Dataset: task202_mnli_contradiction_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 73.7 tokens</li><li>max: 190 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 76.06 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 74.56 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task222_rocstories_two_chioce_slotting_classification
* Dataset: task222_rocstories_two_chioce_slotting_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 48 tokens</li><li>mean: 73.06 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 73.24 tokens</li><li>max: 100 tokens</li></ul> | <ul><li>min: 49 tokens</li><li>mean: 71.71 tokens</li><li>max: 102 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task498_scruples_anecdotes_whoiswrong_classification
* Dataset: task498_scruples_anecdotes_whoiswrong_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 225.8 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 232.86 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 231.22 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task067_abductivenli_answer_generation
* Dataset: task067_abductivenli_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 26.75 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 26.13 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 26.34 tokens</li><li>max: 38 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task616_cola_classification
* Dataset: task616_cola_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 12.16 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 12.05 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.96 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task286_olid_offense_judgment
* Dataset: task286_olid_offense_judgment
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 32.85 tokens</li><li>max: 145 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 30.81 tokens</li><li>max: 171 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 30.26 tokens</li><li>max: 169 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task188_snli_neutral_to_entailment_text_modification
* Dataset: task188_snli_neutral_to_entailment_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 31.55 tokens</li><li>max: 79 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.31 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 32.91 tokens</li><li>max: 84 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task223_quartz_explanation_generation
* Dataset: task223_quartz_explanation_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 31.46 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 31.8 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 28.95 tokens</li><li>max: 96 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task820_protoqa_answer_generation
* Dataset: task820_protoqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.87 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.54 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.22 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task196_sentiment140_answer_generation
* Dataset: task196_sentiment140_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 36.26 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 32.85 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 36.27 tokens</li><li>max: 72 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1678_mathqa_answer_selection
* Dataset: task1678_mathqa_answer_selection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 33 tokens</li><li>mean: 70.42 tokens</li><li>max: 177 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 68.99 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 69.69 tokens</li><li>max: 160 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task349_squad2.0_answerable_unanswerable_question_classification
* Dataset: task349_squad2.0_answerable_unanswerable_question_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 53 tokens</li><li>mean: 176.83 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 57 tokens</li><li>mean: 177.07 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 176.78 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task154_tomqa_find_location_hard_noise
* Dataset: task154_tomqa_find_location_hard_noise
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 129 tokens</li><li>mean: 176.29 tokens</li><li>max: 253 tokens</li></ul> | <ul><li>min: 126 tokens</li><li>mean: 176.3 tokens</li><li>max: 249 tokens</li></ul> | <ul><li>min: 128 tokens</li><li>mean: 178.24 tokens</li><li>max: 254 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task333_hateeval_classification_hate_en
* Dataset: task333_hateeval_classification_hate_en
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 38.33 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 36.79 tokens</li><li>max: 109 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 36.61 tokens</li><li>max: 113 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task235_iirc_question_from_subtext_answer_generation
* Dataset: task235_iirc_question_from_subtext_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 52.9 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 50.44 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 55.89 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1554_scitail_classification
* Dataset: task1554_scitail_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 16.8 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 25.75 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 24.34 tokens</li><li>max: 59 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task210_logic2text_structured_text_generation
* Dataset: task210_logic2text_structured_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 31.88 tokens</li><li>max: 101 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 30.88 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 32.75 tokens</li><li>max: 89 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task035_winogrande_question_modification_person
* Dataset: task035_winogrande_question_modification_person
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 31 tokens</li><li>mean: 36.16 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 35.75 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 35.41 tokens</li><li>max: 48 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task230_iirc_passage_classification
* Dataset: task230_iirc_passage_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 256 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1356_xlsum_title_generation
* Dataset: task1356_xlsum_title_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 59 tokens</li><li>mean: 239.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 58 tokens</li><li>mean: 240.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 64 tokens</li><li>mean: 248.75 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1726_mathqa_correct_answer_generation
* Dataset: task1726_mathqa_correct_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 43.81 tokens</li><li>max: 156 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 42.63 tokens</li><li>max: 129 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 42.82 tokens</li><li>max: 133 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task302_record_classification
* Dataset: task302_record_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 194 tokens</li><li>mean: 253.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 198 tokens</li><li>mean: 252.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 195 tokens</li><li>mean: 252.78 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task380_boolq_yes_no_question
* Dataset: task380_boolq_yes_no_question
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 26 tokens</li><li>mean: 134.17 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 26 tokens</li><li>mean: 138.56 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 138.25 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task212_logic2text_classification
* Dataset: task212_logic2text_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 33.28 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 32.14 tokens</li><li>max: 146 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 32.96 tokens</li><li>max: 127 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task748_glucose_reverse_cause_event_detection
* Dataset: task748_glucose_reverse_cause_event_detection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 35 tokens</li><li>mean: 67.63 tokens</li><li>max: 105 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 66.95 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 68.94 tokens</li><li>max: 105 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task834_mathdataset_classification
* Dataset: task834_mathdataset_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 27.7 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 27.88 tokens</li><li>max: 83 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 26.97 tokens</li><li>max: 93 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task350_winomt_classification_gender_identifiability_pro
* Dataset: task350_winomt_classification_gender_identifiability_pro
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 21.79 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.63 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 21.79 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task191_hotpotqa_question_generation
* Dataset: task191_hotpotqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 198 tokens</li><li>mean: 255.88 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 238 tokens</li><li>mean: 255.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 255 tokens</li><li>mean: 256.0 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task236_iirc_question_from_passage_answer_generation
* Dataset: task236_iirc_question_from_passage_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 135 tokens</li><li>mean: 238.3 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 155 tokens</li><li>mean: 237.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 154 tokens</li><li>mean: 239.64 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task217_rocstories_ordering_answer_generation
* Dataset: task217_rocstories_ordering_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 45 tokens</li><li>mean: 72.32 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.29 tokens</li><li>max: 107 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 70.87 tokens</li><li>max: 105 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task568_circa_question_generation
* Dataset: task568_circa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.6 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.46 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 8.93 tokens</li><li>max: 20 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task614_glucose_cause_event_detection
* Dataset: task614_glucose_cause_event_detection
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 67.66 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 67.16 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 68.48 tokens</li><li>max: 103 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task361_spolin_yesand_prompt_response_classification
* Dataset: task361_spolin_yesand_prompt_response_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 47.01 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 46.18 tokens</li><li>max: 119 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 47.2 tokens</li><li>max: 128 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task421_persent_sentence_sentiment_classification
* Dataset: task421_persent_sentence_sentiment_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 67.77 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 71.21 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 72.24 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task203_mnli_sentence_generation
* Dataset: task203_mnli_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 38.73 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 35.74 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 34.18 tokens</li><li>max: 170 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task420_persent_document_sentiment_classification
* Dataset: task420_persent_document_sentiment_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 224.14 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 233.63 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 227.59 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task153_tomqa_find_location_hard_clean
* Dataset: task153_tomqa_find_location_hard_clean
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 39 tokens</li><li>mean: 160.13 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 159.86 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 162.75 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task346_hybridqa_classification
* Dataset: task346_hybridqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 32.87 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.92 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 31.83 tokens</li><li>max: 75 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1211_atomic_classification_hassubevent
* Dataset: task1211_atomic_classification_hassubevent
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 16.25 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.02 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 16.89 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task360_spolin_yesand_response_generation
* Dataset: task360_spolin_yesand_response_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 22.54 tokens</li><li>max: 89 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 21.16 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.91 tokens</li><li>max: 67 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task510_reddit_tifu_title_summarization
* Dataset: task510_reddit_tifu_title_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 217.53 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 218.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 221.41 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task511_reddit_tifu_long_text_summarization
* Dataset: task511_reddit_tifu_long_text_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 239.72 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 76 tokens</li><li>mean: 238.38 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 245.03 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task345_hybridqa_answer_generation
* Dataset: task345_hybridqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 22.14 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 21.6 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 20.96 tokens</li><li>max: 47 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task270_csrg_counterfactual_context_generation
* Dataset: task270_csrg_counterfactual_context_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 63 tokens</li><li>mean: 100.05 tokens</li><li>max: 158 tokens</li></ul> | <ul><li>min: 63 tokens</li><li>mean: 98.61 tokens</li><li>max: 142 tokens</li></ul> | <ul><li>min: 62 tokens</li><li>mean: 100.35 tokens</li><li>max: 141 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task307_jeopardy_answer_generation_final
* Dataset: task307_jeopardy_answer_generation_final
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 29.61 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 29.31 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 29.28 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task001_quoref_question_generation
* Dataset: task001_quoref_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 201 tokens</li><li>mean: 254.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 99 tokens</li><li>mean: 254.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 173 tokens</li><li>mean: 255.13 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task089_swap_words_verification
* Dataset: task089_swap_words_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 12.86 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.64 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 12.26 tokens</li><li>max: 22 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1196_atomic_classification_oeffect
* Dataset: task1196_atomic_classification_oeffect
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 18.79 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 18.57 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 18.51 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task080_piqa_answer_generation
* Dataset: task080_piqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 10.82 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.77 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.03 tokens</li><li>max: 26 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1598_nyc_long_text_generation
* Dataset: task1598_nyc_long_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 35.5 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 35.66 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 36.66 tokens</li><li>max: 55 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task240_tweetqa_question_generation
* Dataset: task240_tweetqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 27 tokens</li><li>mean: 51.18 tokens</li><li>max: 94 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 50.72 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 51.63 tokens</li><li>max: 95 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task615_moviesqa_answer_generation
* Dataset: task615_moviesqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 11.46 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 11.44 tokens</li><li>max: 19 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.4 tokens</li><li>max: 22 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1347_glue_sts-b_similarity_classification
* Dataset: task1347_glue_sts-b_similarity_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 31.13 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 31.12 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.85 tokens</li><li>max: 92 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task114_is_the_given_word_longest
* Dataset: task114_is_the_given_word_longest
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 28.87 tokens</li><li>max: 68 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 28.46 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 28.7 tokens</li><li>max: 47 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task292_storycommonsense_character_text_generation
* Dataset: task292_storycommonsense_character_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 43 tokens</li><li>mean: 67.87 tokens</li><li>max: 98 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 67.11 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 69.05 tokens</li><li>max: 96 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task115_help_advice_classification
* Dataset: task115_help_advice_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 19.89 tokens</li><li>max: 91 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 18.13 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.22 tokens</li><li>max: 137 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task431_senteval_object_count
* Dataset: task431_senteval_object_count
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 16.78 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.12 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.72 tokens</li><li>max: 35 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1360_numer_sense_multiple_choice_qa_generation
* Dataset: task1360_numer_sense_multiple_choice_qa_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 32 tokens</li><li>mean: 40.62 tokens</li><li>max: 54 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 40.3 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 40.28 tokens</li><li>max: 60 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task177_para-nmt_paraphrasing
* Dataset: task177_para-nmt_paraphrasing
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 19.86 tokens</li><li>max: 82 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.91 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.22 tokens</li><li>max: 36 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task132_dais_text_modification
* Dataset: task132_dais_text_modification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.3 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.08 tokens</li><li>max: 15 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.11 tokens</li><li>max: 15 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task269_csrg_counterfactual_story_generation
* Dataset: task269_csrg_counterfactual_story_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 49 tokens</li><li>mean: 79.95 tokens</li><li>max: 111 tokens</li></ul> | <ul><li>min: 53 tokens</li><li>mean: 79.51 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 79.5 tokens</li><li>max: 114 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task233_iirc_link_exists_classification
* Dataset: task233_iirc_link_exists_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 145 tokens</li><li>mean: 235.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 142 tokens</li><li>mean: 233.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 151 tokens</li><li>mean: 235.1 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task161_count_words_containing_letter
* Dataset: task161_count_words_containing_letter
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 27 tokens</li><li>mean: 30.99 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 30.8 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 30.5 tokens</li><li>max: 42 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1205_atomic_classification_isafter
* Dataset: task1205_atomic_classification_isafter
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 20.91 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 20.65 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.51 tokens</li><li>max: 37 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task571_recipe_nlg_ner_generation
* Dataset: task571_recipe_nlg_ner_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 118.38 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 118.92 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 111.39 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1292_yelp_review_full_text_categorization
* Dataset: task1292_yelp_review_full_text_categorization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 136.66 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 146.65 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 146.05 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task428_senteval_inversion
* Dataset: task428_senteval_inversion
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 16.69 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.58 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.26 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task311_race_question_generation
* Dataset: task311_race_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 115 tokens</li><li>mean: 254.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 137 tokens</li><li>mean: 254.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 255.44 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task429_senteval_tense
* Dataset: task429_senteval_tense
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.84 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.96 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.25 tokens</li><li>max: 36 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task403_creak_commonsense_inference
* Dataset: task403_creak_commonsense_inference
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 30.24 tokens</li><li>max: 104 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.39 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 29.32 tokens</li><li>max: 122 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task929_products_reviews_classification
* Dataset: task929_products_reviews_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 69.68 tokens</li><li>max: 126 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 70.66 tokens</li><li>max: 123 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 70.61 tokens</li><li>max: 123 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task582_naturalquestion_answer_generation
* Dataset: task582_naturalquestion_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 11.65 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 11.73 tokens</li><li>max: 25 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task237_iirc_answer_from_subtext_answer_generation
* Dataset: task237_iirc_answer_from_subtext_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 22 tokens</li><li>mean: 66.3 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 64.61 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 61.49 tokens</li><li>max: 161 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task050_multirc_answerability
* Dataset: task050_multirc_answerability
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 15 tokens</li><li>mean: 32.3 tokens</li><li>max: 112 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 31.56 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 32.13 tokens</li><li>max: 159 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task184_break_generate_question
* Dataset: task184_break_generate_question
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 39.73 tokens</li><li>max: 147 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 38.83 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 39.61 tokens</li><li>max: 148 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task669_ambigqa_answer_generation
* Dataset: task669_ambigqa_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 12.94 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 12.88 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.76 tokens</li><li>max: 22 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task169_strategyqa_sentence_generation
* Dataset: task169_strategyqa_sentence_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 19 tokens</li><li>mean: 35.21 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 22 tokens</li><li>mean: 34.25 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 33.3 tokens</li><li>max: 65 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task500_scruples_anecdotes_title_generation
* Dataset: task500_scruples_anecdotes_title_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 225.76 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 233.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 235.28 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task241_tweetqa_classification
* Dataset: task241_tweetqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 31 tokens</li><li>mean: 61.75 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 62.23 tokens</li><li>max: 106 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 61.7 tokens</li><li>max: 92 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1345_glue_qqp_question_paraprashing
* Dataset: task1345_glue_qqp_question_paraprashing
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 16.86 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.83 tokens</li><li>max: 69 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.62 tokens</li><li>max: 51 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task218_rocstories_swap_order_answer_generation
* Dataset: task218_rocstories_swap_order_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 48 tokens</li><li>mean: 72.41 tokens</li><li>max: 118 tokens</li></ul> | <ul><li>min: 48 tokens</li><li>mean: 72.48 tokens</li><li>max: 102 tokens</li></ul> | <ul><li>min: 47 tokens</li><li>mean: 72.1 tokens</li><li>max: 106 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task613_politifact_text_generation
* Dataset: task613_politifact_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 24.87 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 23.39 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 23.07 tokens</li><li>max: 61 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1167_penn_treebank_coarse_pos_tagging
* Dataset: task1167_penn_treebank_coarse_pos_tagging
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 53.65 tokens</li><li>max: 200 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 53.64 tokens</li><li>max: 220 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 54.8 tokens</li><li>max: 202 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1422_mathqa_physics
* Dataset: task1422_mathqa_physics
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 34 tokens</li><li>mean: 72.71 tokens</li><li>max: 164 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 71.93 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 72.67 tokens</li><li>max: 155 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task247_dream_answer_generation
* Dataset: task247_dream_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 38 tokens</li><li>mean: 160.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 39 tokens</li><li>mean: 159.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 167.8 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task199_mnli_classification
* Dataset: task199_mnli_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 13 tokens</li><li>mean: 43.07 tokens</li><li>max: 127 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 44.72 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 43.81 tokens</li><li>max: 113 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task164_mcscript_question_answering_text
* Dataset: task164_mcscript_question_answering_text
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 150 tokens</li><li>mean: 200.63 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 150 tokens</li><li>mean: 200.9 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 142 tokens</li><li>mean: 200.85 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1541_agnews_classification
* Dataset: task1541_agnews_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 21 tokens</li><li>mean: 53.59 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 53.09 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 53.95 tokens</li><li>max: 161 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task516_senteval_conjoints_inversion
* Dataset: task516_senteval_conjoints_inversion
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 20.33 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 19.01 tokens</li><li>max: 34 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 18.96 tokens</li><li>max: 34 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task294_storycommonsense_motiv_text_generation
* Dataset: task294_storycommonsense_motiv_text_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 40.09 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 40.77 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 39.86 tokens</li><li>max: 86 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task501_scruples_anecdotes_post_type_verification
* Dataset: task501_scruples_anecdotes_post_type_verification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 231.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 235.21 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 234.47 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task213_rocstories_correct_ending_classification
* Dataset: task213_rocstories_correct_ending_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 62 tokens</li><li>mean: 86.17 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 60 tokens</li><li>mean: 85.49 tokens</li><li>max: 131 tokens</li></ul> | <ul><li>min: 59 tokens</li><li>mean: 86.18 tokens</li><li>max: 131 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task821_protoqa_question_generation
* Dataset: task821_protoqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 14.6 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 14.95 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 13.89 tokens</li><li>max: 93 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task493_review_polarity_classification
* Dataset: task493_review_polarity_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 18 tokens</li><li>mean: 100.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 107.28 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 113.07 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task308_jeopardy_answer_generation_all
* Dataset: task308_jeopardy_answer_generation_all
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 27.9 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 26.98 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 27.48 tokens</li><li>max: 48 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1595_event2mind_text_generation_1
* Dataset: task1595_event2mind_text_generation_1
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.86 tokens</li><li>max: 18 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 9.97 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 10.02 tokens</li><li>max: 20 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task040_qasc_question_generation
* Dataset: task040_qasc_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 15.04 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.05 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 13.84 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task231_iirc_link_classification
* Dataset: task231_iirc_link_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 179 tokens</li><li>mean: 246.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 170 tokens</li><li>mean: 245.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 161 tokens</li><li>mean: 247.13 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1727_wiqa_what_is_the_effect
* Dataset: task1727_wiqa_what_is_the_effect
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 44 tokens</li><li>mean: 95.17 tokens</li><li>max: 183 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 95.18 tokens</li><li>max: 185 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 95.42 tokens</li><li>max: 183 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task578_curiosity_dialogs_answer_generation
* Dataset: task578_curiosity_dialogs_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 229.66 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 118 tokens</li><li>mean: 235.49 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 229.46 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task310_race_classification
* Dataset: task310_race_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 101 tokens</li><li>mean: 254.9 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 218 tokens</li><li>mean: 255.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 101 tokens</li><li>mean: 254.9 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task309_race_answer_generation
* Dataset: task309_race_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 75 tokens</li><li>mean: 254.99 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 204 tokens</li><li>mean: 255.6 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 75 tokens</li><li>mean: 255.19 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task379_agnews_topic_classification
* Dataset: task379_agnews_topic_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 20 tokens</li><li>mean: 54.89 tokens</li><li>max: 193 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 54.64 tokens</li><li>max: 175 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 54.78 tokens</li><li>max: 187 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task030_winogrande_full_person
* Dataset: task030_winogrande_full_person
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 7.59 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.49 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 7.38 tokens</li><li>max: 11 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1540_parsed_pdfs_summarization
* Dataset: task1540_parsed_pdfs_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 188.4 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 46 tokens</li><li>mean: 190.16 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 192.07 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task039_qasc_find_overlapping_words
* Dataset: task039_qasc_find_overlapping_words
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 16 tokens</li><li>mean: 30.48 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.05 tokens</li><li>max: 57 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 30.65 tokens</li><li>max: 60 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1206_atomic_classification_isbefore
* Dataset: task1206_atomic_classification_isbefore
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 21.2 tokens</li><li>max: 40 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 20.77 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 21.41 tokens</li><li>max: 31 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task157_count_vowels_and_consonants
* Dataset: task157_count_vowels_and_consonants
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 24 tokens</li><li>mean: 28.0 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 27.91 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 28.3 tokens</li><li>max: 39 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task339_record_answer_generation
* Dataset: task339_record_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 171 tokens</li><li>mean: 235.1 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 234.38 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 171 tokens</li><li>mean: 232.38 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task453_swag_answer_generation
* Dataset: task453_swag_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 18.56 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 18.16 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 17.5 tokens</li><li>max: 55 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task848_pubmedqa_classification
* Dataset: task848_pubmedqa_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 21 tokens</li><li>mean: 248.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 250.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 84 tokens</li><li>mean: 251.62 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task673_google_wellformed_query_classification
* Dataset: task673_google_wellformed_query_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 11.6 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.22 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.34 tokens</li><li>max: 22 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task676_ollie_relationship_answer_generation
* Dataset: task676_ollie_relationship_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 29 tokens</li><li>mean: 50.99 tokens</li><li>max: 113 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 49.39 tokens</li><li>max: 134 tokens</li></ul> | <ul><li>min: 30 tokens</li><li>mean: 51.48 tokens</li><li>max: 113 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task268_casehold_legal_answer_generation
* Dataset: task268_casehold_legal_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 235 tokens</li><li>mean: 255.96 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 156 tokens</li><li>mean: 255.46 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 226 tokens</li><li>mean: 255.94 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task844_financial_phrasebank_classification
* Dataset: task844_financial_phrasebank_classification
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 39.8 tokens</li><li>max: 86 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 38.45 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 39.06 tokens</li><li>max: 86 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task330_gap_answer_generation
* Dataset: task330_gap_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 26 tokens</li><li>mean: 106.78 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 44 tokens</li><li>mean: 108.12 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 45 tokens</li><li>mean: 110.93 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task595_mocha_answer_generation
* Dataset: task595_mocha_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 44 tokens</li><li>mean: 94.08 tokens</li><li>max: 178 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 97.06 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 118.77 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task1285_kpa_keypoint_matching
* Dataset: task1285_kpa_keypoint_matching
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 30 tokens</li><li>mean: 52.36 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 50.14 tokens</li><li>max: 84 tokens</li></ul> | <ul><li>min: 31 tokens</li><li>mean: 53.21 tokens</li><li>max: 88 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task234_iirc_passage_line_answer_generation
* Dataset: task234_iirc_passage_line_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 143 tokens</li><li>mean: 235.25 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 155 tokens</li><li>mean: 235.25 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 146 tokens</li><li>mean: 236.25 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task494_review_polarity_answer_generation
* Dataset: task494_review_polarity_answer_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 106.0 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 23 tokens</li><li>mean: 112.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 112.66 tokens</li><li>max: 249 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task670_ambigqa_question_generation
* Dataset: task670_ambigqa_question_generation
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 12.66 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.48 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 12.24 tokens</li><li>max: 18 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### task289_gigaword_summarization
* Dataset: task289_gigaword_summarization
* Size: 1,018 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 25 tokens</li><li>mean: 51.53 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 27 tokens</li><li>mean: 52.0 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 25 tokens</li><li>mean: 51.44 tokens</li><li>max: 87 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### npr
* Dataset: npr
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 12.74 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 152.32 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 119.75 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### nli
* Dataset: nli
* Size: 49,676 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 21.62 tokens</li><li>max: 108 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.07 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 12.21 tokens</li><li>max: 44 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### SimpleWiki
* Dataset: SimpleWiki
* Size: 5,070 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 29.35 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 33.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 56.42 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### amazon_review_2018
* Dataset: amazon_review_2018
* Size: 99,352 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 11.86 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 88.89 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 70.8 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### ccnews_title_text
* Dataset: ccnews_title_text
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.24 tokens</li><li>max: 59 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 210.26 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 20 tokens</li><li>mean: 194.92 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### agnews
* Dataset: agnews
* Size: 44,606 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 11.73 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 10 tokens</li><li>mean: 39.85 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 45.43 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### xsum
* Dataset: xsum
* Size: 10,140 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 27.77 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 226.87 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 41 tokens</li><li>mean: 232.14 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### msmarco
* Dataset: msmarco
* Size: 173,354 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 9.07 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 82.14 tokens</li><li>max: 237 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 80.54 tokens</li><li>max: 252 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### yahoo_answers_title_answer
* Dataset: yahoo_answers_title_answer
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 16.73 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 82.94 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 86.15 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### squad_pairs
* Dataset: squad_pairs
* Size: 24,838 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 14.05 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 153.91 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 34 tokens</li><li>mean: 162.67 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### wow
* Dataset: wow
* Size: 29,908 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 88.36 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 100 tokens</li><li>mean: 112.02 tokens</li><li>max: 150 tokens</li></ul> | <ul><li>min: 83 tokens</li><li>mean: 113.07 tokens</li><li>max: 147 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_counterfactual-avs_triplets
* Dataset: mteb-amazon_counterfactual-avs_triplets
* Size: 4,055 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 27.68 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 26.84 tokens</li><li>max: 137 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 26.34 tokens</li><li>max: 91 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_massive_intent-avs_triplets
* Dataset: mteb-amazon_massive_intent-avs_triplets
* Size: 11,661 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:--------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.5 tokens</li><li>max: 28 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.05 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.45 tokens</li><li>max: 25 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_massive_scenario-avs_triplets
* Dataset: mteb-amazon_massive_scenario-avs_triplets
* Size: 11,661 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 9.62 tokens</li><li>max: 39 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.19 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 9.59 tokens</li><li>max: 24 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-amazon_reviews_multi-avs_triplets
* Dataset: mteb-amazon_reviews_multi-avs_triplets
* Size: 198,192 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 49.55 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 49.51 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 48.42 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-banking77-avs_triplets
* Dataset: mteb-banking77-avs_triplets
* Size: 10,139 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 15.81 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.77 tokens</li><li>max: 73 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 16.1 tokens</li><li>max: 73 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-emotion-avs_triplets
* Dataset: mteb-emotion-avs_triplets
* Size: 16,224 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 22.04 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 17.71 tokens</li><li>max: 65 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 21.99 tokens</li><li>max: 72 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-imdb-avs_triplets
* Dataset: mteb-imdb-avs_triplets
* Size: 24,839 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 34 tokens</li><li>mean: 207.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 36 tokens</li><li>mean: 223.93 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 42 tokens</li><li>mean: 206.87 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-mtop_domain-avs_triplets
* Dataset: mteb-mtop_domain-avs_triplets
* Size: 15,715 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 10.27 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.62 tokens</li><li>max: 24 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.01 tokens</li><li>max: 33 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-mtop_intent-avs_triplets
* Dataset: mteb-mtop_intent-avs_triplets
* Size: 15,715 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 10.22 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.74 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 10.43 tokens</li><li>max: 28 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-toxic_conversations_50k-avs_triplets
* Dataset: mteb-toxic_conversations_50k-avs_triplets
* Size: 49,677 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 67.17 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 88.29 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 64.96 tokens</li><li>max: 252 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### mteb-tweet_sentiment_extraction-avs_triplets
* Dataset: mteb-tweet_sentiment_extraction-avs_triplets
* Size: 27,373 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 20.58 tokens</li><li>max: 45 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 20.26 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 21.1 tokens</li><li>max: 59 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### covid-bing-query-gpt4-avs_triplets
* Dataset: covid-bing-query-gpt4-avs_triplets
* Size: 5,070 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 15.28 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 37.6 tokens</li><li>max: 92 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 38.13 tokens</li><li>max: 239 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 18,269 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 16.04 tokens</li><li>max: 55 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 142.75 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 144.56 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
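
The "Approximate statistics" tables throughout this section report min/mean/max token counts over the first 1000 samples of each split. A minimal sketch of how such numbers could be reproduced, assuming a Hugging Face tokenizer; the checkpoint and the tiny in-memory dataset are placeholders, and lengths are clipped at an assumed 256-token maximum to match the caps seen in the tables:

```python
from datasets import Dataset
from transformers import AutoTokenizer

# Placeholder tokenizer; in practice the model's own tokenizer would be used.
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

# Tiny stand-in dataset with the same column layout as the triplet sets above.
dataset = Dataset.from_dict({
    "anchor": ["example query", "another query"],
    "positive": ["a matching passage", "another matching passage"],
    "negative": ["an unrelated passage", "another unrelated passage"],
})

def token_stats(texts, max_length=256):
    """Return (min, mean, max) token counts, clipped at max_length."""
    lengths = [
        len(tokenizer.encode(t, truncation=True, max_length=max_length))
        for t in texts
    ]
    return min(lengths), sum(lengths) / len(lengths), max(lengths)

for column in ("anchor", "positive", "negative"):
    lo, mean, hi = token_stats(dataset[column][:1000])
    print(f"{column}: min={lo}, mean={mean:.2f}, max={hi}")
```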
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `gradient_checkpointing`: True
- `batch_sampler`: no_duplicates
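
As a minimal sketch, the non-default values above map onto `SentenceTransformerTrainingArguments` (sentence-transformers >= 3.0) roughly as follows; the output directory is a placeholder:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/finetune",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    gradient_checkpointing=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```

The `no_duplicates` batch sampler is a sensible pairing with `MultipleNegativesRankingLoss`: duplicate texts within a batch would otherwise act as false in-batch negatives.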
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 512
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | medi-mteb-dev_max_accuracy |
|:------:|:-----:|:-------------:|:------:|:--------------------------:|
| 0 | 0 | - | - | 0.8705 |
| 0.1308 | 500 | 2.1744 | 1.5723 | 0.8786 |
| 0.2616 | 1000 | 1.9245 | 1.5045 | 0.8851 |
| 0.3925 | 1500 | 1.9833 | 1.4719 | 0.8882 |
| 0.5233 | 2000 | 1.7492 | 1.4434 | 0.8909 |
| 0.6541 | 2500 | 1.8815 | 1.4244 | 0.8935 |
| 0.7849 | 3000 | 1.7921 | 1.4064 | 0.8949 |
| 0.9158 | 3500 | 1.8495 | 1.3894 | 0.8956 |
| 1.0466 | 4000 | 1.7415 | 1.3744 | 0.8966 |
| 1.1774 | 4500 | 1.8663 | 1.3619 | 0.9005 |
| 1.3082 | 5000 | 1.7016 | 1.3520 | 0.8979 |
| 1.4390 | 5500 | 1.7308 | 1.3467 | 0.9007 |
| 1.5699 | 6000 | 1.6965 | 1.3346 | 0.9021 |
| 1.7007 | 6500 | 1.7355 | 1.3251 | 0.9018 |
| 1.8315 | 7000 | 1.6783 | 1.3156 | 0.9031 |
| 1.9623 | 7500 | 1.6381 | 1.3101 | 0.9047 |
| 2.0931 | 8000 | 1.7169 | 1.3056 | 0.9044 |
| 2.2240 | 8500 | 1.6527 | 1.3070 | 0.9039 |
| 2.3548 | 9000 | 1.7078 | 1.2977 | 0.9055 |
| 2.4856 | 9500 | 1.533 | 1.2991 | 0.9050 |
| 2.6164 | 10000 | 1.6676 | 1.2916 | 0.9057 |
| 2.7473 | 10500 | 1.5866 | 1.2885 | 0.9053 |
| 2.8781 | 11000 | 1.641 | 1.2765 | 0.9066 |
| 3.0089 | 11500 | 1.5193 | 1.2816 | 0.9062 |
| 3.1397 | 12000 | 1.6907 | 1.2804 | 0.9065 |
| 3.2705 | 12500 | 1.557 | 1.2684 | 0.9065 |
| 3.4014 | 13000 | 1.6808 | 1.2711 | 0.9075 |
| 3.5322 | 13500 | 1.4751 | 1.2700 | 0.9072 |
| 3.6630 | 14000 | 1.5934 | 1.2692 | 0.9081 |
| 3.7938 | 14500 | 1.5395 | 1.2672 | 0.9087 |
| 3.9246 | 15000 | 1.5809 | 1.2678 | 0.9072 |
| 4.0555 | 15500 | 1.4972 | 1.2621 | 0.9089 |
| 4.1863 | 16000 | 1.614 | 1.2690 | 0.9070 |
| 4.3171 | 16500 | 1.5186 | 1.2625 | 0.9091 |
| 4.4479 | 17000 | 1.5239 | 1.2629 | 0.9079 |
| 4.5788 | 17500 | 1.5354 | 1.2569 | 0.9086 |
| 4.7096 | 18000 | 1.5134 | 1.2559 | 0.9095 |
| 4.8404 | 18500 | 1.5237 | 1.2494 | 0.9100 |
| 4.9712 | 19000 | 1.5038 | 1.2486 | 0.9113 |
| 5.1020 | 19500 | 1.5527 | 1.2493 | 0.9098 |
| 5.2329 | 20000 | 1.5018 | 1.2521 | 0.9102 |
| 5.3637 | 20500 | 1.584 | 1.2496 | 0.9095 |
| 5.4945 | 21000 | 1.3948 | 1.2467 | 0.9102 |
| 5.6253 | 21500 | 1.5118 | 1.2487 | 0.9098 |
| 5.7561 | 22000 | 1.458 | 1.2471 | 0.9098 |
| 5.8870 | 22500 | 1.5158 | 1.2367 | 0.9105 |
| 6.0178 | 23000 | 1.4091 | 1.2480 | 0.9096 |
| 6.1486 | 23500 | 1.5823 | 1.2456 | 0.9114 |
| 6.2794 | 24000 | 1.4383 | 1.2404 | 0.9101 |
| 6.4103 | 24500 | 1.5606 | 1.2431 | 0.9100 |
| 6.5411 | 25000 | 1.3906 | 1.2386 | 0.9112 |
| 6.6719 | 25500 | 1.4887 | 1.2382 | 0.9103 |
| 6.8027 | 26000 | 1.4347 | 1.2384 | 0.9112 |
| 6.9335 | 26500 | 1.4733 | 1.2395 | 0.9113 |
| 7.0644 | 27000 | 1.4323 | 1.2385 | 0.9111 |
| 7.1952 | 27500 | 1.505 | 1.2413 | 0.9107 |
| 7.3260 | 28000 | 1.4648 | 1.2362 | 0.9114 |
| 7.4568 | 28500 | 1.4252 | 1.2361 | 0.9116 |
| 7.5877 | 29000 | 1.458 | 1.2344 | 0.9118 |
| 7.7185 | 29500 | 1.4309 | 1.2357 | 0.9120 |
| 7.8493 | 30000 | 1.4431 | 1.2330 | 0.9114 |
| 7.9801 | 30500 | 1.4266 | 1.2306 | 0.9127 |
| 8.1109 | 31000 | 1.4803 | 1.2328 | 0.9118 |
| 8.2418 | 31500 | 1.414 | 1.2345 | 0.9110 |
| 8.3726 | 32000 | 1.5456 | 1.2343 | 0.9116 |
| 8.5034 | 32500 | 1.346 | 1.2324 | 0.9118 |
| 8.6342 | 33000 | 1.4467 | 1.2315 | 0.9118 |
| 8.7650 | 33500 | 1.3864 | 1.2330 | 0.9119 |
| 8.8959 | 34000 | 1.4806 | 1.2277 | 0.9119 |
| 9.0267 | 34500 | 1.3381 | 1.2330 | 0.9119 |
| 9.1575 | 35000 | 1.5277 | 1.2315 | 0.9121 |
| 9.2883 | 35500 | 1.3966 | 1.2309 | 0.9112 |
| 9.4192 | 36000 | 1.4921 | 1.2321 | 0.9117 |
| 9.5500 | 36500 | 1.3668 | 1.2303 | 0.9118 |
| 9.6808 | 37000 | 1.4407 | 1.2308 | 0.9121 |
| 9.8116 | 37500 | 1.3852 | 1.2314 | 0.9118 |
| 9.9424 | 38000 | 1.4329 | 1.2300 | 0.9120 |
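
Not part of the original card: a minimal sketch, assuming pandas and matplotlib are available, for turning the log table above into a loss curve. The inline CSV holds a few rows copied from the table as a stand-in for the full log:

```python
import io

import matplotlib.pyplot as plt
import pandas as pd

# A few rows copied from the table above, as a stand-in for the full log.
log_csv = io.StringIO(
    "epoch,step,train_loss,val_loss\n"
    "0.1308,500,2.1744,1.5723\n"
    "0.2616,1000,1.9245,1.5045\n"
    "9.9424,38000,1.4329,1.2300\n"
)
df = pd.read_csv(log_csv)

plt.plot(df["step"], df["train_loss"], label="training loss")
plt.plot(df["step"], df["val_loss"], label="validation loss")
plt.xlabel("optimizer step")
plt.ylabel("loss")
plt.legend()
plt.show()
```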
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.1.0.dev0
- Transformers: 4.42.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.20.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SUMMARIZATION",
"PARAPHRASING"
] | [
"PUBMEDQA",
"SCIFACT",
"SCIQ",
"SCITAIL"
] |
rjnClarke/sentence-transformers-all-MiniLM-L6-v2-fine-tuned | rjnClarke | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10359",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-06T12:57:45 | 2024-08-06T12:57:57 | 51 | 0 | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@3
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@200
- cosine_map@100
- dot_accuracy@3
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@200
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10359
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Cleopatra reacts to the news of Antony's death with a mixture of
sadness and resignation, contemplating her own mortality and the fickle nature
of life.
sentences:
- "Immortal longings in me. Now no more The juice of Egypt's grape shall moist\
\ this lip. Yare, yare, good Iras; quick. Methinks I hear Antony call. I\
\ see him rouse himself To praise my noble act. I hear him mock The luck\
\ of Caesar, which the gods give men To excuse their after wrath. Husband,\
\ I come. Now to that name my courage prove my title! I am fire and air;\
\ my other elements I give to baser life. So, have you done? Come then,\
\ and take the last warmth of my lips. Farewell, kind Charmian. Iras, long\
\ farewell. [Kisses them. IRAS falls and dies] \
\ Have I the aspic in my lips? Dost fall? If thus thou and nature can so gently\
\ part, The stroke of death is as a lover's pinch, Which hurts and is desir'd.\
\ Dost thou lie still? If thou vanishest, thou tell'st the world It is\
\ not worth leave-taking. CHARMIAN. Dissolve, thick cloud, and rain, that I may\
\ say The gods themselves do weep. CLEOPATRA. This proves me base.\n \
\ If she first meet the curled Antony,\n"
- "BURGUNDY. Warlike and martial Talbot, Burgundy\n Enshrines thee in his heart,\
\ and there erects Thy noble deeds as valour's monuments. TALBOT. Thanks,\
\ gentle Duke. But where is Pucelle now? I think her old familiar is asleep.\
\ Now where's the Bastard's braves, and Charles his gleeks? What, all amort?\
\ Rouen hangs her head for grief That such a valiant company are fled. Now\
\ will we take some order in the town, Placing therein some expert officers;\
\ And then depart to Paris to the King, For there young Henry with his nobles\
\ lie. BURGUNDY. What Lord Talbot pleaseth Burgundy. TALBOT. But yet, before\
\ we go, let's not forget The noble Duke of Bedford, late deceas'd, But\
\ see his exequies fulfill'd in Rouen. A braver soldier never couched lance,\
\ A gentler heart did never sway in court; But kings and mightiest potentates\
\ must die, For that's the end of human misery. Exeunt\n"
- "Your suffering in this dearth, you may as well\n Strike at the heaven with\
\ your staves as lift them Against the Roman state; whose course will on \
\ The way it takes, cracking ten thousand curbs Of more strong link asunder\
\ than can ever Appear in your impediment. For the dearth, The gods, not\
\ the patricians, make it, and Your knees to them, not arms, must help. Alack,\
\ You are transported by calamity Thither where more attends you; and you\
\ slander The helms o' th' state, who care for you like fathers, When you\
\ curse them as enemies. FIRST CITIZEN. Care for us! True, indeed! They ne'er\
\ car'd for us yet. Suffer us to famish, and their storehouses cramm'd with\
\ grain; make edicts for usury, to support usurers; repeal daily any wholesome\
\ act established against the rich, and provide more piercing statutes daily\
\ to chain up and restrain the poor. If the wars eat us not up, they will;\
\ and there's all the love they bear us. MENENIUS. Either you must Confess\
\ yourselves wondrous malicious, Or be accus'd of folly. I shall tell you \
\ A pretty tale. It may be you have heard it; But, since it serves my purpose,\
\ I will venture To stale't a little more. FIRST CITIZEN. Well, I'll hear\
\ it, sir; yet you must not think to fob off our disgrace with a tale. But,\
\ an't please you, deliver. MENENIUS. There was a time when all the body's members\
\ Rebell'd against the belly; thus accus'd it: That only like a gulf it\
\ did remain I' th' midst o' th' body, idle and unactive, Still cupboarding\
\ the viand, never bearing Like labour with the rest; where th' other instruments\
\ Did see and hear, devise, instruct, walk, feel,\n And, mutually participate,\
\ did minister\n"
- source_sentence: How does the excerpt reflect themes of loyalty and sacrifice in
the play?
sentences:
- "me a thousand marks in links and torches, walking with thee in\n the night\
\ betwixt tavern and tavern; but the sack that thou hast drunk me would have\
\ bought me lights as good cheap at the dearest chandler's in Europe. I have\
\ maintained that salamander of yours with fire any time this two-and-thirty\
\ years. God reward me for it! Bard. 'Sblood, I would my face were in your\
\ belly! Fal. God-a-mercy! so should I be sure to be heart-burn'd.\n \
\ Enter Hostess. How now, Dame Partlet the hen? Have you enquir'd\
\ yet who pick'd\n my pocket? Host. Why, Sir John, what do you think, Sir\
\ John? Do you think I keep thieves in my house? I have search'd, I have enquired,\
\ so has my husband, man by man, boy by boy, servant by servant. The tithe\
\ of a hair was never lost in my house before. Fal. Ye lie, hostess. Bardolph\
\ was shav'd and lost many a hair, and I'll be sworn my pocket was pick'd.\
\ Go to, you are a woman, go! Host. Who, I? No; I defy thee! God's light, I was\
\ never call'd so in mine own house before! Fal. Go to, I know you well enough.\
\ Host. No, Sir John; you do not know me, Sir John. I know you, Sir John.\
\ You owe me money, Sir John, and now you pick a quarrel to beguile me of\
\ it. I bought you a dozen of shirts to your back. Fal. Dowlas, filthy dowlas!\
\ I have given them away to bakers' wives; they have made bolters of them.\
\ Host. Now, as I am a true woman, holland of eight shillings an ell. You\
\ owe money here besides, Sir John, for your diet and by-drinkings, and money\
\ lent you, four-and-twenty pound. Fal. He had his part of it; let him pay. \
\ Host. He? Alas, he is poor; he hath nothing. Fal. How? Poor? Look upon his\
\ face. What call you rich? Let them coin his nose, let them coin his cheeks.\
\ I'll not pay a denier.\n What, will you make a younker of me? Shall I not\
\ take mine ease\n"
- "EDWARD. I wonder how our princely father scap'd,\n Or whether he be scap'd\
\ away or no From Clifford's and Northumberland's pursuit. Had he been ta'en,\
\ we should have heard the news; Had he been slain, we should have heard the\
\ news; Or had he scap'd, methinks we should have heard The happy tidings\
\ of his good escape. How fares my brother? Why is he so sad? RICHARD. I cannot\
\ joy until I be resolv'd Where our right valiant father is become. I saw\
\ him in the battle range about, And watch'd him how he singled Clifford forth.\
\ Methought he bore him in the thickest troop As doth a lion in a herd of\
\ neat;\n Or as a bear, encompass'd round with dogs,\n Who having pinch'd\
\ a few and made them cry, The rest stand all aloof and bark at him. So\
\ far'd our father with his enemies; So fled his enemies my warlike father.\
\ Methinks 'tis prize enough to be his son. See how the morning opes her\
\ golden gates And takes her farewell of the glorious sun. How well resembles\
\ it the prime of youth, Trimm'd like a younker prancing to his love! EDWARD.\
\ Dazzle mine eyes, or do I see three suns? RICHARD. Three glorious suns, each\
\ one a perfect sun; Not separated with the racking clouds, But sever'd\
\ in a pale clear-shining sky. See, see! they join, embrace, and seem to kiss,\
\ As if they vow'd some league inviolable. Now are they but one lamp, one\
\ light, one sun. In this the heaven figures some event. EDWARD. 'Tis wondrous\
\ strange, the like yet never heard of. I think it cites us, brother, to the\
\ field, That we, the sons of brave Plantagenet, Each one already blazing\
\ by our meeds, Should notwithstanding join our lights together And overshine\
\ the earth, as this the world. Whate'er it bodes, henceforward will I bear\
\ Upon my target three fair shining suns. RICHARD. Nay, bear three daughters-\
\ by your leave I speak it, You love the breeder better than the male.\n"
- "Forget that rarest treasure of your cheek,\n Exposing it- but, O, the harder\
\ heart! Alack, no remedy!- to the greedy touch Of common-kissing Titan,\
\ and forget Your laboursome and dainty trims wherein You made great Juno\
\ angry. IMOGEN. Nay, be brief; I see into thy end, and am almost A man\
\ already. PISANIO. First, make yourself but like one. Fore-thinking this,\
\ I have already fit- 'Tis in my cloak-bag- doublet, hat, hose, all That\
\ answer to them. Would you, in their serving, And with what imitation you\
\ can borrow From youth of such a season, fore noble Lucius Present yourself,\
\ desire his service, tell him Wherein you're happy- which will make him know\
\ If that his head have ear in music; doubtless With joy he will embrace\
\ you; for he's honourable, And, doubling that, most holy. Your means abroad-\
\ You have me, rich; and I will never fail Beginning nor supplyment. IMOGEN.\
\ Thou art all the comfort The gods will diet me with. Prithee away! There's\
\ more to be consider'd; but we'll even All that good time will give us. This\
\ attempt I am soldier to, and will abide it with A prince's courage. Away,\
\ I prithee. PISANIO. Well, madam, we must take a short farewell, Lest, being\
\ miss'd, I be suspected of Your carriage from the court. My noble mistress,\
\ Here is a box; I had it from the Queen. What's in't is precious. If you\
\ are sick at sea Or stomach-qualm'd at land, a dram of this\n Will drive\
\ away distemper. To some shade,\n And fit you to your manhood. May the gods\
\ Direct you to the best! IMOGEN. Amen. I thank thee. Exeunt\
\ severally\n"
- source_sentence: The excerpt showcases the emotional turmoil and sense of honor
that drives Brutus to take his own life in the face of defeat.
sentences:
- "Thou know'st that we two went to school together;\n Even for that our love\
\ of old, I prithee, Hold thou my sword-hilts, whilst I run on it. VOLUMNIUS.\
\ That's not an office for a friend, my lord. \
\ Alarum still. CLITUS. Fly, fly, my lord, there is no tarrying\
\ here. BRUTUS. Farewell to you, and you, and you, Volumnius. Strato, thou\
\ hast been all this while asleep; Farewell to thee too, Strato. Countrymen,\
\ My heart doth joy that yet in all my life I found no man but he was true\
\ to me. I shall have glory by this losing day, More than Octavius and Mark\
\ Antony By this vile conquest shall attain unto. So, fare you well at once,\
\ for Brutus' tongue Hath almost ended his life's history. Night hangs upon\
\ mine eyes, my bones would rest That have but labor'd to attain this hour.\
\ Alarum. Cry within, \"Fly, fly, fly!\" CLITUS. Fly,\
\ my lord, fly. BRUTUS. Hence! I will follow. Exeunt Clitus,\
\ Dardanius, and Volumnius. I prithee, Strato, stay thou by thy lord. Thou\
\ art a fellow of a good respect; Thy life hath had some smatch of honor in\
\ it. Hold then my sword, and turn away thy face, While I do run upon it.\
\ Wilt thou, Strato? STRATO. Give me your hand first. Fare you well, my lord.\
\ BRUTUS. Farewell, good Strato. Runs on his sword. Caesar,\
\ now be still; I kill'd not thee with half so good a will. Dies.\n\
\ Alarum. Retreat. Enter Octavius, Antony, Messala,\n Lucilius,\
\ and the Army.\n OCTAVIUS. What man is that?\n"
- "Elsinore. A room in the Castle.\nEnter King, Queen, Polonius, Ophelia, Rosencrantz,\
\ Guildenstern, and Lords. King. And can you by no drift of circumstance\n \
\ Get from him why he puts on this confusion, Grating so harshly all his days\
\ of quiet With turbulent and dangerous lunacy? Ros. He does confess he feels\
\ himself distracted, But from what cause he will by no means speak. Guil.\
\ Nor do we find him forward to be sounded, But with a crafty madness keeps\
\ aloof When we would bring him on to some confession Of his true state.\
\ Queen. Did he receive you well? Ros. Most like a gentleman. Guil. But with\
\ much forcing of his disposition. Ros. Niggard of question, but of our demands\
\ Most free in his reply. Queen. Did you assay him To any pastime? Ros.\
\ Madam, it so fell out that certain players\n We o'erraught on the way.\
\ Of these we told him,\n"
- "VII.\nThe French camp near Agincourt\nEnter the CONSTABLE OF FRANCE, the LORD\
\ RAMBURES, the DUKE OF ORLEANS,\nthe DAUPHIN, with others\n CONSTABLE. Tut!\
\ I have the best armour of the world.\n Would it were day! ORLEANS. You have\
\ an excellent armour; but let my horse have his due. CONSTABLE. It is the\
\ best horse of Europe. ORLEANS. Will it never be morning? DAUPHIN. My Lord\
\ of Orleans and my Lord High Constable, you talk of horse and armour? ORLEANS.\
\ You are as well provided of both as any prince in the world. DAUPHIN. What\
\ a long night is this! I will not change my horse with any that treads but\
\ on four pasterns. Ca, ha! he bounds from the earth as if his entrails were\
\ hairs; le cheval volant, the Pegasus, chez les narines de feu! When I bestride\
\ him I soar, I am a hawk. He trots the air; the earth sings when he touches\
\ it; the basest horn of his hoof is more musical than the pipe of Hermes.\
\ ORLEANS. He's of the colour of the nutmeg. DAUPHIN. And of the heat of the\
\ ginger. It is a beast for Perseus: he is pure air and fire; and the dull\
\ elements of earth and water never appear in him, but only in patient stillness\
\ while his rider mounts him; he is indeed a horse, and all other jades you\
\ may call beasts. CONSTABLE. Indeed, my lord, it is a most absolute and excellent\
\ horse.\n DAUPHIN. It is the prince of palfreys; his neigh is like the\n"
- source_sentence: What themes are present in the excerpt from the play?
sentences:
- "Enter TRAVERS NORTHUMBERLAND. Here comes my servant Travers, whom I sent\n \
\ On Tuesday last to listen after news. LORD BARDOLPH. My lord, I over-rode\
\ him on the way; And he is furnish'd with no certainties More than he haply\
\ may retail from me. NORTHUMBERLAND. Now, Travers, what good tidings comes with\
\ you? TRAVERS. My lord, Sir John Umfrevile turn'd me back With joyful tidings;\
\ and, being better hors'd, Out-rode me. After him came spurring hard A\
\ gentleman, almost forspent with speed, That stopp'd by me to breathe his\
\ bloodied horse. He ask'd the way to Chester; and of him I did demand what\
\ news from Shrewsbury. He told me that rebellion had bad luck, And that\
\ young Harry Percy's spur was cold. With that he gave his able horse the\
\ head And, bending forward, struck his armed heels\n Against the panting\
\ sides of his poor jade\n Up to the rowel-head; and starting so, He seem'd\
\ in running to devour the way, Staying no longer question. NORTHUMBERLAND.\
\ Ha! Again: Said he young Harry Percy's spur was cold? Of Hotspur, Coldspur?\
\ that rebellion Had met ill luck? LORD BARDOLPH. My lord, I'll tell you what:\
\ If my young lord your son have not the day, Upon mine honour, for a silken\
\ point I'll give my barony. Never talk of it. NORTHUMBERLAND. Why should\
\ that gentleman that rode by Travers Give then such instances of loss? LORD\
\ BARDOLPH. Who- he? He was some hilding fellow that had stol'n The horse\
\ he rode on and, upon my life, Spoke at a venture. Look, here comes more news.\
\ \n Enter Morton NORTHUMBERLAND. Yea, this man's brow,\
\ like to a title-leaf,\n"
- "ANTONY. Yet they are not join'd. Where yond pine does stand\n I shall discover\
\ all. I'll bring thee word Straight how 'tis like to go. \
\ Exit SCARUS. Swallows have built In Cleopatra's sails their nests.\
\ The augurers Say they know not, they cannot tell; look grimly, And dare\
\ not speak their knowledge. Antony Is valiant and dejected; and by starts\
\ His fretted fortunes give him hope and fear Of what he has and has not.\
\ [Alarum afar off, as at a sea-fight]\n \
\ Re-enter ANTONY ANTONY. All is lost!\n This foul Egyptian hath\
\ betrayed me. My fleet hath yielded to the foe, and yonder They cast\
\ their caps up and carouse together Like friends long lost. Triple-turn'd\
\ whore! 'tis thou\n Hast sold me to this novice; and my heart\n Makes\
\ only wars on thee. Bid them all fly; For when I am reveng'd upon my charm,\
\ I have done all. Bid them all fly; begone. Exit SCARUS O sun, thy\
\ uprise shall I see no more! Fortune and Antony part here; even here Do\
\ we shake hands. All come to this? The hearts That spaniel'd me at heels,\
\ to whom I gave Their wishes, do discandy, melt their sweets On blossoming\
\ Caesar; and this pine is bark'd That overtopp'd them all. Betray'd I am.\
\ O this false soul of Egypt! this grave charm- Whose eye beck'd forth my\
\ wars and call'd them home, Whose bosom was my crownet, my chief end- Like\
\ a right gypsy hath at fast and loose Beguil'd me to the very heart of loss.\
\ What, Eros, Eros! Enter CLEOPATRA\n Ah, thou spell!\
\ Avaunt!\n"
- "TALBOT. Saint George and victory! Fight, soldiers, fight.\n The Regent hath\
\ with Talbot broke his word And left us to the rage of France his sword. \
\ Where is John Talbot? Pause and take thy breath; I gave thee life and rescu'd\
\ thee from death. JOHN. O, twice my father, twice am I thy son! The life\
\ thou gav'st me first was lost and done Till with thy warlike sword, despite\
\ of fate, To my determin'd time thou gav'st new date. TALBOT. When from the\
\ Dauphin's crest thy sword struck fire, It warm'd thy father's heart with\
\ proud desire Of bold-fac'd victory. Then leaden age, Quicken'd with youthful\
\ spleen and warlike rage, Beat down Alencon, Orleans, Burgundy, And from\
\ the pride of Gallia rescued thee. The ireful bastard Orleans, that drew blood\
\ From thee, my boy, and had the maidenhood Of thy first fight, I soon encountered\
\ And, interchanging blows, I quickly shed Some of his bastard blood; and\
\ in disgrace\n Bespoke him thus: 'Contaminated, base,\n"
- source_sentence: What is the significance of the tennis balls in the excerpt from
the play?
sentences:
- "My fault is past. But, O, what form of prayer\n Can serve my turn? 'Forgive\
\ me my foul murther'? That cannot be; since I am still possess'd Of those\
\ effects for which I did the murther- My crown, mine own ambition, and my\
\ queen. May one be pardon'd and retain th' offence? In the corrupted currents\
\ of this world Offence's gilded hand may shove by justice, And oft 'tis\
\ seen the wicked prize itself Buys out the law; but 'tis not so above. \
\ There is no shuffling; there the action lies In his true nature, and we ourselves\
\ compell'd, Even to the teeth and forehead of our faults, To give in evidence.\
\ What then? What rests? Try what repentance can. What can it not? Yet what\
\ can it when one cannot repent? O wretched state! O bosom black as death!\
\ O limed soul, that, struggling to be free, Art more engag'd! Help, angels!\
\ Make assay. Bow, stubborn knees; and heart with strings of steel, Be\
\ soft as sinews of the new-born babe! All may be well. \
\ He kneels.\n Enter Hamlet. Ham. Now might\
\ I do it pat, now he is praying;\n And now I'll do't. And so he goes to heaven,\
\ And so am I reveng'd. That would be scann'd. A villain kills my father;\
\ and for that, I, his sole son, do this same villain send To heaven. \
\ Why, this is hire and salary, not revenge! He took my father grossly, full\
\ of bread, With all his crimes broad blown, as flush as May; And how his\
\ audit stands, who knows save heaven?\n But in our circumstance and course\
\ of thought,\n"
- "YORK. From Ireland thus comes York to claim his right\n And pluck the crown\
\ from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright,\
\ To entertain great England's lawful king. Ah, sancta majestas! who would\
\ not buy thee dear? Let them obey that knows not how to rule; This hand\
\ was made to handle nought but gold. I cannot give due action to my words\
\ Except a sword or sceptre balance it.\n A sceptre shall it have, have\
\ I a soul\n On which I'll toss the flower-de-luce of France.\n \
\ Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb\
\ me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York,\
\ if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept\
\ thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger\
\ from Henry, our dread liege, To know the reason of these arms in peace; \
\ Or why thou, being a subject as I am, Against thy oath and true allegiance\
\ sworn, Should raise so great a power without his leave, Or dare to bring\
\ thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is\
\ so great. O, I could hew up rocks and fight with flint, I am so angry\
\ at these abject terms; And now, like Ajax Telamonius, On sheep or oxen\
\ could I spend my fury. I am far better born than is the King, More like\
\ a king, more kingly in my thoughts; But I must make fair weather yet awhile,\
\ Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon\
\ me That I have given no answer all this while; My mind was troubled with\
\ deep melancholy. The cause why I have brought this army hither Is to\
\ remove proud Somerset from the King, Seditious to his Grace and to the state.\
\ BUCKINGHAM. That is too much presumption on thy part; But if thy arms be\
\ to no other end, The King hath yielded unto thy demand:\n The Duke of\
\ Somerset is in the Tower.\n"
- "Says that you savour too much of your youth,\n And bids you be advis'd there's\
\ nought in France That can be with a nimble galliard won; You cannot revel\
\ into dukedoms there. He therefore sends you, meeter for your spirit, This\
\ tun of treasure; and, in lieu of this, Desires you let the dukedoms that\
\ you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What\
\ treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the\
\ Dauphin is so pleasant with us; His present and your pains we thank you for.\
\ When we have match'd our rackets to these balls, We will in France,\
\ by God's grace, play a set Shall strike his father's crown into the hazard.\
\ Tell him he hath made a match with such a wrangler That all the courts\
\ of France will be disturb'd With chaces. And we understand him well, How\
\ he comes o'er us with our wilder days, Not measuring what use we made of\
\ them. We never valu'd this poor seat of England; And therefore, living\
\ hence, did give ourself To barbarous licence; as 'tis ever common That\
\ men are merriest when they are from home. But tell the Dauphin I will keep\
\ my state, Be like a king, and show my sail of greatness, When I do rouse\
\ me in my throne of France; For that I have laid by my majesty And plodded\
\ like a man for working-days; But I will rise there with so full a glory \
\ That I will dazzle all the eyes of France, Yea, strike the Dauphin blind\
\ to look on us. And tell the pleasant Prince this mock of his Hath turn'd\
\ his balls to gun-stones, and his soul Shall stand sore charged for the wasteful\
\ vengeance\n That shall fly with them; for many a thousand widows\n"
model-index:
- name: RAG_general/rerank/models/sentence-transformers-all-MiniLM-L6-v2-ft
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: mini dev
type: mini-dev
metrics:
- type: cosine_accuracy@3
value: 0.4582971329278888
name: Cosine Accuracy@3
- type: cosine_precision@1
value: 0.342745438748914
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.15276571097596292
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10139009556907037
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.056298870547350124
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.342745438748914
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4582971329278888
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5069504778453519
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5629887054735013
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4482222879991295
name: Cosine Ndcg@10
- type: cosine_mrr@200
value: 0.41834905952135354
name: Cosine Mrr@200
- type: cosine_map@100
value: 0.4180307788743427
name: Cosine Map@100
- type: dot_accuracy@3
value: 0.4582971329278888
name: Dot Accuracy@3
- type: dot_precision@1
value: 0.342745438748914
name: Dot Precision@1
- type: dot_precision@3
value: 0.15276571097596292
name: Dot Precision@3
- type: dot_precision@5
value: 0.10139009556907037
name: Dot Precision@5
- type: dot_precision@10
value: 0.056298870547350124
name: Dot Precision@10
- type: dot_recall@1
value: 0.342745438748914
name: Dot Recall@1
- type: dot_recall@3
value: 0.4582971329278888
name: Dot Recall@3
- type: dot_recall@5
value: 0.5069504778453519
name: Dot Recall@5
- type: dot_recall@10
value: 0.5629887054735013
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4482222879991295
name: Dot Ndcg@10
- type: dot_mrr@200
value: 0.41834905952135354
name: Dot Mrr@200
- type: dot_map@100
value: 0.4180307788743427
name: Dot Map@100
---
# RAG_general/rerank/models/sentence-transformers-all-MiniLM-L6-v2-ft
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision 8b3219a92973c328a8e22fadcfa821b5dc75636a -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
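The stack above amounts to three stages: a BERT-style encoder, attention-masked mean pooling over its token embeddings, and L2 normalization of the pooled vector. For intuition, here is a minimal sketch that reproduces those stages with plain `transformers`; the base checkpoint name is used for illustration, and this is a hand-rolled approximation rather than the loading path the library actually uses.
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Base checkpoint used for illustration; the fine-tuned model has the
# same architecture, only different weights.
model_name = "sentence-transformers/all-MiniLM-L6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

def embed(sentences: list[str]) -> torch.Tensor:
    # Stage (0): transformer encoder, truncating to 256 tokens
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = encoder(**batch).last_hidden_state  # (B, T, 384)
    # Stage (1): mean pooling, ignoring padding via the attention mask
    mask = batch["attention_mask"].unsqueeze(-1).float()       # (B, T, 1)
    pooled = (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)
    # Stage (2): L2 normalization, so dot product equals cosine similarity
    return F.normalize(pooled, p=2, dim=1)

print(embed(["to be or not to be"]).shape)  # torch.Size([1, 384])
```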
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rjnClarke/sentence-transformers-all-MiniLM-L6-v2-fine-tuned")
# Run inference
sentences = [
'What is the significance of the tennis balls in the excerpt from the play?',
"Says that you savour too much of your youth,\n And bids you be advis'd there's nought in France That can be with a nimble galliard won; You cannot revel into dukedoms there. He therefore sends you, meeter for your spirit, This tun of treasure; and, in lieu of this, Desires you let the dukedoms that you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant with us; His present and your pains we thank you for. When we have match'd our rackets to these balls, We will in France, by God's grace, play a set Shall strike his father's crown into the hazard. Tell him he hath made a match with such a wrangler That all the courts of France will be disturb'd With chaces. And we understand him well, How he comes o'er us with our wilder days, Not measuring what use we made of them. We never valu'd this poor seat of England; And therefore, living hence, did give ourself To barbarous licence; as 'tis ever common That men are merriest when they are from home. But tell the Dauphin I will keep my state, Be like a king, and show my sail of greatness, When I do rouse me in my throne of France; For that I have laid by my majesty And plodded like a man for working-days; But I will rise there with so full a glory That I will dazzle all the eyes of France, Yea, strike the Dauphin blind to look on us. And tell the pleasant Prince this mock of his Hath turn'd his balls to gun-stones, and his soul Shall stand sore charged for the wasteful vengeance\n That shall fly with them; for many a thousand widows\n",
"YORK. From Ireland thus comes York to claim his right\n And pluck the crown from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright, To entertain great England's lawful king. Ah, sancta majestas! who would not buy thee dear? Let them obey that knows not how to rule; This hand was made to handle nought but gold. I cannot give due action to my words Except a sword or sceptre balance it.\n A sceptre shall it have, have I a soul\n On which I'll toss the flower-de-luce of France.\n Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York, if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger from Henry, our dread liege, To know the reason of these arms in peace; Or why thou, being a subject as I am, Against thy oath and true allegiance sworn, Should raise so great a power without his leave, Or dare to bring thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is so great. O, I could hew up rocks and fight with flint, I am so angry at these abject terms; And now, like Ajax Telamonius, On sheep or oxen could I spend my fury. I am far better born than is the King, More like a king, more kingly in my thoughts; But I must make fair weather yet awhile, Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon me That I have given no answer all this while; My mind was troubled with deep melancholy. The cause why I have brought this army hither Is to remove proud Somerset from the King, Seditious to his Grace and to the state. BUCKINGHAM. That is too much presumption on thy part; But if thy arms be to no other end, The King hath yielded unto thy demand:\n The Duke of Somerset is in the Tower.\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
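Since the model was fine-tuned for retrieving play excerpts, a typical downstream use is ranking candidate passages against a question. Below is a minimal retrieval-style sketch under the same API; the truncated passages are placeholders, not full excerpts:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("rjnClarke/sentence-transformers-all-MiniLM-L6-v2-fine-tuned")

query = "What themes are present in the excerpt from the play?"
passages = [
    "ANTONY. Yet they are not join'd. Where yond pine does stand ...",
    "TALBOT. Saint George and victory! Fight, soldiers, fight. ...",
]

# Embed the query and the candidates, then rank by cosine similarity
# (the embeddings are L2-normalized, so dot product and cosine agree).
query_embedding = model.encode([query])
passage_embeddings = model.encode(passages)
scores = model.similarity(query_embedding, passage_embeddings)[0]
for idx in scores.argsort(descending=True):
    print(f"{scores[idx]:.4f}  {passages[idx][:50]}")
```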
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `mini-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@3 | 0.4583 |
| cosine_precision@1 | 0.3427 |
| cosine_precision@3 | 0.1528 |
| cosine_precision@5 | 0.1014 |
| cosine_precision@10 | 0.0563 |
| cosine_recall@1 | 0.3427 |
| cosine_recall@3 | 0.4583 |
| cosine_recall@5 | 0.507 |
| cosine_recall@10 | 0.563 |
| cosine_ndcg@10 | 0.4482 |
| cosine_mrr@200 | 0.4183 |
| **cosine_map@100** | **0.418** |
| dot_accuracy@3 | 0.4583 |
| dot_precision@1 | 0.3427 |
| dot_precision@3 | 0.1528 |
| dot_precision@5 | 0.1014 |
| dot_precision@10 | 0.0563 |
| dot_recall@1 | 0.3427 |
| dot_recall@3 | 0.4583 |
| dot_recall@5 | 0.507 |
| dot_recall@10 | 0.563 |
| dot_ndcg@10 | 0.4482 |
| dot_mrr@200 | 0.4183 |
| dot_map@100 | 0.418 |
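The figures above come from an `InformationRetrievalEvaluator` run over the mini-dev split. The split itself is not published with this card, so the sketch below wires the evaluator up with toy stand-ins (hypothetical ids, truncated passages) purely to show the expected data shapes:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rjnClarke/sentence-transformers-all-MiniLM-L6-v2-fine-tuned")

# Toy stand-ins for the mini-dev split: queries and corpus map ids to
# text; relevant_docs maps each query id to its relevant corpus ids.
queries = {"q1": "What is the significance of the tennis balls?"}
corpus = {
    "d1": "EXETER. Tennis-balls, my liege. ...",
    "d2": "YORK. From Ireland thus comes York to claim his right ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs,
                                          name="mini-dev")
results = evaluator(model)  # accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
print(results)
```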
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,359 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 22.32 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 238.33 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Who is the general being described in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>What is the main conflict highlighted in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>The excerpt showcases the tension between Antony's loyalty to Cleopatra and his obligations to Caesar, as well as Cleopatra's influence over him.</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
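With this loss, each `anchor` is pulled toward its own `positive` while every other positive in the batch serves as an in-batch negative, which is why the `no_duplicates` batch sampler listed under the hyperparameters below matters. A minimal training sketch against the same two-column schema (one illustrative pair shown; the actual dataset has 10,359):
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Two-column dataset matching the card's schema; the pair is truncated
# for illustration.
train_dataset = Dataset.from_dict({
    "anchor": ["Who is the general being described in the excerpt?"],
    "positive": ["PHILO. Nay, but this dotage of our general's ..."],
})

# scale=20.0 and cosine similarity are the parameters shown above
# (both are also the library defaults).
loss = MultipleNegativesRankingLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model,
                                     train_dataset=train_dataset,
                                     loss=loss)
trainer.train()
```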
### Evaluation Dataset
#### Unnamed Dataset
* Size: 2,302 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 21.73 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 239.59 tokens</li><li>max: 256 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The excerpt highlights the tension between Antony's loyalty to Cleopatra and his standing in Rome, showcasing the intricate balance of power and love in the play.</code> | <code>When shrill-tongu'd Fulvia scolds. The messengers!<br> ANTONY. Let Rome in Tiber melt, and the wide arch Of the rang'd empire fall! Here is my space. Kingdoms are clay; our dungy earth alike Feeds beast as man. The nobleness of life Is to do thus [emhracing], when such a mutual pair And such a twain can do't, in which I bind, On pain of punishment, the world to weet We stand up peerless. CLEOPATRA. Excellent falsehood! Why did he marry Fulvia, and not love her? I'll seem the fool I am not. Antony Will be himself. ANTONY. But stirr'd by Cleopatra. Now for the love of Love and her soft hours, Let's not confound the time with conference harsh; There's not a minute of our lives should stretch Without some pleasure now. What sport to-night? CLEOPATRA. Hear the ambassadors. ANTONY. Fie, wrangling queen! Whom everything becomes- to chide, to laugh, To weep; whose every passion fully strives To make itself in thee fair and admir'd. No messenger but thine, and all alone To-night we'll wander through the streets and note The qualities of people. Come, my queen; Last night you did desire it. Speak not to us. Exeunt ANTONY and CLEOPATRA, with the train DEMETRIUS. Is Caesar with Antonius priz'd so slight? PHILO. Sir, sometimes when he is not Antony, He comes too short of that great property Which still should go with Antony. DEMETRIUS. I am full sorry That he approves the common liar, who Thus speaks of him at Rome; but I will hope<br> Of better deeds to-morrow. Rest you happy! Exeunt<br></code> |
| <code>What is the significance of the soothsayer in the context of the play?</code> | <code>CHARMIAN. Lord Alexas, sweet Alexas, most anything Alexas, almost<br> most absolute Alexas, where's the soothsayer that you prais'd so to th' Queen? O that I knew this husband, which you say must charge his horns with garlands! ALEXAS. Soothsayer! SOOTHSAYER. Your will? CHARMIAN. Is this the man? Is't you, sir, that know things? SOOTHSAYER. In nature's infinite book of secrecy A little I can read. ALEXAS. Show him your hand.<br> Enter ENOBARBUS ENOBARBUS. Bring in the banquet quickly; wine enough<br> Cleopatra's health to drink. CHARMIAN. Good, sir, give me good fortune. SOOTHSAYER. I make not, but foresee. CHARMIAN. Pray, then, foresee me one. SOOTHSAYER. You shall be yet far fairer than you are. CHARMIAN. He means in flesh. IRAS. No, you shall paint when you are old. CHARMIAN. Wrinkles forbid! ALEXAS. Vex not his prescience; be attentive. CHARMIAN. Hush!<br> SOOTHSAYER. You shall be more beloving than beloved.<br></code> |
| <code>What is the setting of the scene in which the excerpt takes place?</code> | <code>sweet Isis, I beseech thee! And let her die too, and give him a<br> worse! And let worse follow worse, till the worst of all follow him laughing to his grave, fiftyfold a cuckold! Good Isis, hear me this prayer, though thou deny me a matter of more weight; good Isis, I beseech thee! IRAS. Amen. Dear goddess, hear that prayer of the people! For, as it is a heartbreaking to see a handsome man loose-wiv'd, so it is a deadly sorrow to behold a foul knave uncuckolded. Therefore, dear Isis, keep decorum, and fortune him accordingly! CHARMIAN. Amen. ALEXAS. Lo now, if it lay in their hands to make me a cuckold, they would make themselves whores but they'ld do't!<br> Enter CLEOPATRA ENOBARBUS. Hush! Here comes Antony.<br> CHARMIAN. Not he; the Queen. CLEOPATRA. Saw you my lord? ENOBARBUS. No, lady. CLEOPATRA. Was he not here? CHARMIAN. No, madam. CLEOPATRA. He was dispos'd to mirth; but on the sudden A Roman thought hath struck him. Enobarbus! ENOBARBUS. Madam? CLEOPATRA. Seek him, and bring him hither. Where's Alexas? ALEXAS. Here, at your service. My lord approaches.<br> Enter ANTONY, with a MESSENGER and attendants CLEOPATRA. We will not look upon him. Go with us.<br> Exeunt CLEOPATRA, ENOBARBUS, and the rest MESSENGER. Fulvia thy wife first came into the field. ANTONY. Against my brother Lucius? MESSENGER. Ay.<br> But soon that war had end, and the time's state<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 7
- `warmup_steps`: 50
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
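These settings map one-to-one onto `SentenceTransformerTrainingArguments`. A minimal sketch follows; the `output_dir` is a placeholder, and `save_strategy` is set explicitly because `load_best_model_at_end` requires it to match `eval_strategy`:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/all-MiniLM-L6-v2-ft",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",  # must match eval_strategy for best-model loading
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=7,
    warmup_steps=50,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```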
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 7
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 50
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | mini-dev_cosine_map@100 |
|:-------:|:--------:|:-------------:|:---------:|:-----------------------:|
| 1.0 | 324 | - | 1.9598 | 0.3728 |
| 1.5432 | 500 | 2.1523 | - | - |
| 2.0 | 648 | - | 1.8067 | 0.4023 |
| 3.0 | 972 | - | 1.7600 | 0.4144 |
| 3.0864 | 1000 | 1.4271 | - | - |
| **4.0** | **1296** | **-** | **1.746** | **0.418** |
| 4.6296 | 1500 | 0.9807 | - | - |
| 5.0 | 1620 | - | 1.7604 | 0.4146 |
| 6.0 | 1944 | - | 1.7558 | 0.4153 |
| 6.1728 | 2000 | 0.7846 | - | - |
| 7.0 | 2268 | - | 1.7571 | 0.4180 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.43.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
Muennighoff/SGPT-5.8B-weightedmean-nli-bitfit | Muennighoff | sentence-similarity | [
"sentence-transformers",
"pytorch",
"gptj",
"feature-extraction",
"sentence-similarity",
"mteb",
"arxiv:2202.08904",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2022-10-03T12:16:09 | 50 | 6 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
model-index:
- name: SGPT-5.8B-weightedmean-nli-bitfit
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 74.07462686567165
- type: ap
value: 37.44692407529112
- type: f1
value: 68.28971003916419
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 66.63811563169165
- type: ap
value: 78.57252079915924
- type: f1
value: 64.5543087846584
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 77.21889055472263
- type: ap
value: 25.663426367826712
- type: f1
value: 64.26265688503176
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: 2d8a100785abf0ae21420d2a55b0c56e3e1ea996
metrics:
- type: accuracy
value: 58.06209850107067
- type: ap
value: 14.028219107023915
- type: f1
value: 48.10387189660778
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: 80714f8dcf8cefc218ef4f8c5a966dd83f75a0e1
metrics:
- type: accuracy
value: 82.30920000000002
- type: ap
value: 76.88786578621213
- type: f1
value: 82.15455656065011
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 41.584
- type: f1
value: 41.203137944390114
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 35.288000000000004
- type: f1
value: 34.672995558518096
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 38.34
- type: f1
value: 37.608755629529455
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 37.839999999999996
- type: f1
value: 36.86898201563507
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 30.936000000000003
- type: f1
value: 30.49401738527071
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: c379a6705fec24a2493fa68e011692605f44e119
metrics:
- type: accuracy
value: 33.75
- type: f1
value: 33.38338946025617
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: 5b3e3697907184a9b77a3c99ee9ea1a9cbb1e4e3
metrics:
- type: map_at_1
value: 13.727
- type: map_at_10
value: 26.740000000000002
- type: map_at_100
value: 28.218
- type: map_at_1000
value: 28.246
- type: map_at_3
value: 21.728
- type: map_at_5
value: 24.371000000000002
- type: ndcg_at_1
value: 13.727
- type: ndcg_at_10
value: 35.07
- type: ndcg_at_100
value: 41.947
- type: ndcg_at_1000
value: 42.649
- type: ndcg_at_3
value: 24.484
- type: ndcg_at_5
value: 29.282999999999998
- type: precision_at_1
value: 13.727
- type: precision_at_10
value: 6.223
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 10.835
- type: precision_at_5
value: 8.848
- type: recall_at_1
value: 13.727
- type: recall_at_10
value: 62.233000000000004
- type: recall_at_100
value: 93.67
- type: recall_at_1000
value: 99.14699999999999
- type: recall_at_3
value: 32.504
- type: recall_at_5
value: 44.239
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: 0bbdb47bcbe3a90093699aefeed338a0f28a7ee8
metrics:
- type: v_measure
value: 40.553923271901695
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: b73bd54100e5abfa6e3a23dcafb46fe4d2438dc3
metrics:
- type: v_measure
value: 32.49323183712211
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 4d853f94cd57d85ec13805aeeac3ae3e5eb4c49c
metrics:
- type: map
value: 55.89811361443445
- type: mrr
value: 70.16235764850724
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: 9ee918f184421b6bd48b78f6c714d86546106103
metrics:
- type: cos_sim_pearson
value: 82.50506557805856
- type: cos_sim_spearman
value: 79.50000423261176
- type: euclidean_pearson
value: 75.76190885392926
- type: euclidean_spearman
value: 76.7330737163434
- type: manhattan_pearson
value: 75.825318036112
- type: manhattan_spearman
value: 76.7415076434559
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 75.49060542797494
- type: f1
value: 75.15379262352123
- type: precision
value: 74.99391092553932
- type: recall
value: 75.49060542797494
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.4182258419546555
- type: f1
value: 0.4182258419546555
- type: precision
value: 0.4182258419546555
- type: recall
value: 0.4182258419546555
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.013855213023900243
- type: f1
value: 0.0115460108532502
- type: precision
value: 0.010391409767925183
- type: recall
value: 0.013855213023900243
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 0.315955766192733
- type: f1
value: 0.315955766192733
- type: precision
value: 0.315955766192733
- type: recall
value: 0.315955766192733
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 44fa15921b4c889113cc5df03dd4901b49161ab7
metrics:
- type: accuracy
value: 81.74025974025973
- type: f1
value: 81.66568824876
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 11d0121201d1f1f280e8cc8f3d98fb9c4d9f9c55
metrics:
- type: v_measure
value: 33.59451202614059
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: c0fab014e1bcb8d3a5e31b2088972a1e01547dc1
metrics:
- type: v_measure
value: 29.128241446157165
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 2b9f5791698b5be7bc5e10535c8690f20043c3db
metrics:
- type: map_at_1
value: 26.715
- type: map_at_10
value: 35.007
- type: map_at_100
value: 36.352000000000004
- type: map_at_1000
value: 36.51
- type: map_at_3
value: 32.257999999999996
- type: map_at_5
value: 33.595000000000006
- type: ndcg_at_1
value: 33.906
- type: ndcg_at_10
value: 40.353
- type: ndcg_at_100
value: 45.562999999999995
- type: ndcg_at_1000
value: 48.454
- type: ndcg_at_3
value: 36.349
- type: ndcg_at_5
value: 37.856
- type: precision_at_1
value: 33.906
- type: precision_at_10
value: 7.854
- type: precision_at_100
value: 1.29
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 17.549
- type: precision_at_5
value: 12.561
- type: recall_at_1
value: 26.715
- type: recall_at_10
value: 49.508
- type: recall_at_100
value: 71.76599999999999
- type: recall_at_1000
value: 91.118
- type: recall_at_3
value: 37.356
- type: recall_at_5
value: 41.836
- type: map_at_1
value: 19.663
- type: map_at_10
value: 27.086
- type: map_at_100
value: 28.066999999999997
- type: map_at_1000
value: 28.18
- type: map_at_3
value: 24.819
- type: map_at_5
value: 26.332
- type: ndcg_at_1
value: 25.732
- type: ndcg_at_10
value: 31.613999999999997
- type: ndcg_at_100
value: 35.757
- type: ndcg_at_1000
value: 38.21
- type: ndcg_at_3
value: 28.332
- type: ndcg_at_5
value: 30.264000000000003
- type: precision_at_1
value: 25.732
- type: precision_at_10
value: 6.038
- type: precision_at_100
value: 1.034
- type: precision_at_1000
value: 0.149
- type: precision_at_3
value: 13.864
- type: precision_at_5
value: 10.241999999999999
- type: recall_at_1
value: 19.663
- type: recall_at_10
value: 39.585
- type: recall_at_100
value: 57.718
- type: recall_at_1000
value: 74.26700000000001
- type: recall_at_3
value: 29.845
- type: recall_at_5
value: 35.105
- type: map_at_1
value: 30.125
- type: map_at_10
value: 39.824
- type: map_at_100
value: 40.935
- type: map_at_1000
value: 41.019
- type: map_at_3
value: 37.144
- type: map_at_5
value: 38.647999999999996
- type: ndcg_at_1
value: 34.922
- type: ndcg_at_10
value: 45.072
- type: ndcg_at_100
value: 50.046
- type: ndcg_at_1000
value: 51.895
- type: ndcg_at_3
value: 40.251
- type: ndcg_at_5
value: 42.581
- type: precision_at_1
value: 34.922
- type: precision_at_10
value: 7.303999999999999
- type: precision_at_100
value: 1.0739999999999998
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 17.994
- type: precision_at_5
value: 12.475999999999999
- type: recall_at_1
value: 30.125
- type: recall_at_10
value: 57.253
- type: recall_at_100
value: 79.35799999999999
- type: recall_at_1000
value: 92.523
- type: recall_at_3
value: 44.088
- type: recall_at_5
value: 49.893
- type: map_at_1
value: 16.298000000000002
- type: map_at_10
value: 21.479
- type: map_at_100
value: 22.387
- type: map_at_1000
value: 22.483
- type: map_at_3
value: 19.743
- type: map_at_5
value: 20.444000000000003
- type: ndcg_at_1
value: 17.740000000000002
- type: ndcg_at_10
value: 24.887
- type: ndcg_at_100
value: 29.544999999999998
- type: ndcg_at_1000
value: 32.417
- type: ndcg_at_3
value: 21.274
- type: ndcg_at_5
value: 22.399
- type: precision_at_1
value: 17.740000000000002
- type: precision_at_10
value: 3.932
- type: precision_at_100
value: 0.666
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 8.927
- type: precision_at_5
value: 6.056
- type: recall_at_1
value: 16.298000000000002
- type: recall_at_10
value: 34.031
- type: recall_at_100
value: 55.769000000000005
- type: recall_at_1000
value: 78.19500000000001
- type: recall_at_3
value: 23.799999999999997
- type: recall_at_5
value: 26.562
- type: map_at_1
value: 10.958
- type: map_at_10
value: 16.999
- type: map_at_100
value: 17.979
- type: map_at_1000
value: 18.112000000000002
- type: map_at_3
value: 15.010000000000002
- type: map_at_5
value: 16.256999999999998
- type: ndcg_at_1
value: 14.179
- type: ndcg_at_10
value: 20.985
- type: ndcg_at_100
value: 26.216
- type: ndcg_at_1000
value: 29.675
- type: ndcg_at_3
value: 17.28
- type: ndcg_at_5
value: 19.301
- type: precision_at_1
value: 14.179
- type: precision_at_10
value: 3.968
- type: precision_at_100
value: 0.784
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 8.541
- type: precision_at_5
value: 6.468
- type: recall_at_1
value: 10.958
- type: recall_at_10
value: 29.903000000000002
- type: recall_at_100
value: 53.413
- type: recall_at_1000
value: 78.74799999999999
- type: recall_at_3
value: 19.717000000000002
- type: recall_at_5
value: 24.817
- type: map_at_1
value: 21.217
- type: map_at_10
value: 29.677
- type: map_at_100
value: 30.928
- type: map_at_1000
value: 31.063000000000002
- type: map_at_3
value: 26.611
- type: map_at_5
value: 28.463
- type: ndcg_at_1
value: 26.083000000000002
- type: ndcg_at_10
value: 35.217
- type: ndcg_at_100
value: 40.715
- type: ndcg_at_1000
value: 43.559
- type: ndcg_at_3
value: 30.080000000000002
- type: ndcg_at_5
value: 32.701
- type: precision_at_1
value: 26.083000000000002
- type: precision_at_10
value: 6.622
- type: precision_at_100
value: 1.115
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 14.629
- type: precision_at_5
value: 10.837
- type: recall_at_1
value: 21.217
- type: recall_at_10
value: 47.031
- type: recall_at_100
value: 70.378
- type: recall_at_1000
value: 89.704
- type: recall_at_3
value: 32.427
- type: recall_at_5
value: 39.31
- type: map_at_1
value: 19.274
- type: map_at_10
value: 26.398
- type: map_at_100
value: 27.711000000000002
- type: map_at_1000
value: 27.833000000000002
- type: map_at_3
value: 24.294
- type: map_at_5
value: 25.385
- type: ndcg_at_1
value: 24.886
- type: ndcg_at_10
value: 30.909
- type: ndcg_at_100
value: 36.941
- type: ndcg_at_1000
value: 39.838
- type: ndcg_at_3
value: 27.455000000000002
- type: ndcg_at_5
value: 28.828
- type: precision_at_1
value: 24.886
- type: precision_at_10
value: 5.6739999999999995
- type: precision_at_100
value: 1.0290000000000001
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 13.242
- type: precision_at_5
value: 9.292
- type: recall_at_1
value: 19.274
- type: recall_at_10
value: 39.643
- type: recall_at_100
value: 66.091
- type: recall_at_1000
value: 86.547
- type: recall_at_3
value: 29.602
- type: recall_at_5
value: 33.561
- type: map_at_1
value: 18.653666666666666
- type: map_at_10
value: 25.606666666666666
- type: map_at_100
value: 26.669333333333334
- type: map_at_1000
value: 26.795833333333334
- type: map_at_3
value: 23.43433333333333
- type: map_at_5
value: 24.609666666666666
- type: ndcg_at_1
value: 22.742083333333333
- type: ndcg_at_10
value: 29.978333333333335
- type: ndcg_at_100
value: 34.89808333333333
- type: ndcg_at_1000
value: 37.806583333333336
- type: ndcg_at_3
value: 26.223666666666674
- type: ndcg_at_5
value: 27.91033333333333
- type: precision_at_1
value: 22.742083333333333
- type: precision_at_10
value: 5.397083333333334
- type: precision_at_100
value: 0.9340000000000002
- type: precision_at_1000
value: 0.13691666666666663
- type: precision_at_3
value: 12.331083333333332
- type: precision_at_5
value: 8.805499999999999
- type: recall_at_1
value: 18.653666666666666
- type: recall_at_10
value: 39.22625000000001
- type: recall_at_100
value: 61.31049999999999
- type: recall_at_1000
value: 82.19058333333334
- type: recall_at_3
value: 28.517333333333333
- type: recall_at_5
value: 32.9565
- type: map_at_1
value: 16.07
- type: map_at_10
value: 21.509
- type: map_at_100
value: 22.335
- type: map_at_1000
value: 22.437
- type: map_at_3
value: 19.717000000000002
- type: map_at_5
value: 20.574
- type: ndcg_at_1
value: 18.865000000000002
- type: ndcg_at_10
value: 25.135999999999996
- type: ndcg_at_100
value: 29.483999999999998
- type: ndcg_at_1000
value: 32.303
- type: ndcg_at_3
value: 21.719
- type: ndcg_at_5
value: 23.039
- type: precision_at_1
value: 18.865000000000002
- type: precision_at_10
value: 4.263999999999999
- type: precision_at_100
value: 0.696
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 9.866999999999999
- type: precision_at_5
value: 6.902
- type: recall_at_1
value: 16.07
- type: recall_at_10
value: 33.661
- type: recall_at_100
value: 54.001999999999995
- type: recall_at_1000
value: 75.564
- type: recall_at_3
value: 23.956
- type: recall_at_5
value: 27.264
- type: map_at_1
value: 10.847
- type: map_at_10
value: 15.518
- type: map_at_100
value: 16.384
- type: map_at_1000
value: 16.506
- type: map_at_3
value: 14.093
- type: map_at_5
value: 14.868
- type: ndcg_at_1
value: 13.764999999999999
- type: ndcg_at_10
value: 18.766
- type: ndcg_at_100
value: 23.076
- type: ndcg_at_1000
value: 26.344
- type: ndcg_at_3
value: 16.150000000000002
- type: ndcg_at_5
value: 17.373
- type: precision_at_1
value: 13.764999999999999
- type: precision_at_10
value: 3.572
- type: precision_at_100
value: 0.6779999999999999
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 7.88
- type: precision_at_5
value: 5.712
- type: recall_at_1
value: 10.847
- type: recall_at_10
value: 25.141999999999996
- type: recall_at_100
value: 44.847
- type: recall_at_1000
value: 68.92099999999999
- type: recall_at_3
value: 17.721999999999998
- type: recall_at_5
value: 20.968999999999998
- type: map_at_1
value: 18.377
- type: map_at_10
value: 26.005
- type: map_at_100
value: 26.996
- type: map_at_1000
value: 27.116
- type: map_at_3
value: 23.712
- type: map_at_5
value: 24.859
- type: ndcg_at_1
value: 22.201
- type: ndcg_at_10
value: 30.635
- type: ndcg_at_100
value: 35.623
- type: ndcg_at_1000
value: 38.551
- type: ndcg_at_3
value: 26.565
- type: ndcg_at_5
value: 28.28
- type: precision_at_1
value: 22.201
- type: precision_at_10
value: 5.41
- type: precision_at_100
value: 0.88
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 12.531
- type: precision_at_5
value: 8.806
- type: recall_at_1
value: 18.377
- type: recall_at_10
value: 40.908
- type: recall_at_100
value: 63.563
- type: recall_at_1000
value: 84.503
- type: recall_at_3
value: 29.793999999999997
- type: recall_at_5
value: 34.144999999999996
- type: map_at_1
value: 20.246
- type: map_at_10
value: 27.528000000000002
- type: map_at_100
value: 28.78
- type: map_at_1000
value: 29.002
- type: map_at_3
value: 25.226
- type: map_at_5
value: 26.355
- type: ndcg_at_1
value: 25.099
- type: ndcg_at_10
value: 32.421
- type: ndcg_at_100
value: 37.2
- type: ndcg_at_1000
value: 40.693
- type: ndcg_at_3
value: 28.768
- type: ndcg_at_5
value: 30.23
- type: precision_at_1
value: 25.099
- type: precision_at_10
value: 6.245
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 13.767999999999999
- type: precision_at_5
value: 9.881
- type: recall_at_1
value: 20.246
- type: recall_at_10
value: 41.336
- type: recall_at_100
value: 63.098
- type: recall_at_1000
value: 86.473
- type: recall_at_3
value: 30.069000000000003
- type: recall_at_5
value: 34.262
- type: map_at_1
value: 14.054
- type: map_at_10
value: 20.25
- type: map_at_100
value: 21.178
- type: map_at_1000
value: 21.288999999999998
- type: map_at_3
value: 18.584999999999997
- type: map_at_5
value: 19.536
- type: ndcg_at_1
value: 15.527
- type: ndcg_at_10
value: 23.745
- type: ndcg_at_100
value: 28.610999999999997
- type: ndcg_at_1000
value: 31.740000000000002
- type: ndcg_at_3
value: 20.461
- type: ndcg_at_5
value: 22.072
- type: precision_at_1
value: 15.527
- type: precision_at_10
value: 3.882
- type: precision_at_100
value: 0.6930000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 9.181000000000001
- type: precision_at_5
value: 6.433
- type: recall_at_1
value: 14.054
- type: recall_at_10
value: 32.714
- type: recall_at_100
value: 55.723
- type: recall_at_1000
value: 79.72399999999999
- type: recall_at_3
value: 23.832
- type: recall_at_5
value: 27.754
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: 392b78eb68c07badcd7c2cd8f39af108375dfcce
metrics:
- type: map_at_1
value: 6.122
- type: map_at_10
value: 11.556
- type: map_at_100
value: 12.998000000000001
- type: map_at_1000
value: 13.202
- type: map_at_3
value: 9.657
- type: map_at_5
value: 10.585
- type: ndcg_at_1
value: 15.049000000000001
- type: ndcg_at_10
value: 17.574
- type: ndcg_at_100
value: 24.465999999999998
- type: ndcg_at_1000
value: 28.511999999999997
- type: ndcg_at_3
value: 13.931
- type: ndcg_at_5
value: 15.112
- type: precision_at_1
value: 15.049000000000001
- type: precision_at_10
value: 5.831
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.20500000000000002
- type: precision_at_3
value: 10.749
- type: precision_at_5
value: 8.365
- type: recall_at_1
value: 6.122
- type: recall_at_10
value: 22.207
- type: recall_at_100
value: 47.08
- type: recall_at_1000
value: 70.182
- type: recall_at_3
value: 13.416
- type: recall_at_5
value: 16.672
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: f097057d03ed98220bc7309ddb10b71a54d667d6
metrics:
- type: map_at_1
value: 4.672
- type: map_at_10
value: 10.534
- type: map_at_100
value: 14.798
- type: map_at_1000
value: 15.927
- type: map_at_3
value: 7.317
- type: map_at_5
value: 8.726
- type: ndcg_at_1
value: 36.5
- type: ndcg_at_10
value: 26.098
- type: ndcg_at_100
value: 29.215999999999998
- type: ndcg_at_1000
value: 36.254999999999995
- type: ndcg_at_3
value: 29.247
- type: ndcg_at_5
value: 27.692
- type: precision_at_1
value: 47.25
- type: precision_at_10
value: 22.625
- type: precision_at_100
value: 7.042
- type: precision_at_1000
value: 1.6129999999999998
- type: precision_at_3
value: 34.083000000000006
- type: precision_at_5
value: 29.5
- type: recall_at_1
value: 4.672
- type: recall_at_10
value: 15.638
- type: recall_at_100
value: 36.228
- type: recall_at_1000
value: 58.831
- type: recall_at_3
value: 8.578
- type: recall_at_5
value: 11.18
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 829147f8f75a25f005913200eb5ed41fae320aa1
metrics:
- type: accuracy
value: 49.919999999999995
- type: f1
value: 45.37973678791632
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: 1429cf27e393599b8b359b9b72c666f96b2525f9
metrics:
- type: map_at_1
value: 25.801000000000002
- type: map_at_10
value: 33.941
- type: map_at_100
value: 34.73
- type: map_at_1000
value: 34.793
- type: map_at_3
value: 31.705
- type: map_at_5
value: 33.047
- type: ndcg_at_1
value: 27.933000000000003
- type: ndcg_at_10
value: 38.644
- type: ndcg_at_100
value: 42.594
- type: ndcg_at_1000
value: 44.352000000000004
- type: ndcg_at_3
value: 34.199
- type: ndcg_at_5
value: 36.573
- type: precision_at_1
value: 27.933000000000003
- type: precision_at_10
value: 5.603000000000001
- type: precision_at_100
value: 0.773
- type: precision_at_1000
value: 0.094
- type: precision_at_3
value: 14.171
- type: precision_at_5
value: 9.786999999999999
- type: recall_at_1
value: 25.801000000000002
- type: recall_at_10
value: 50.876
- type: recall_at_100
value: 69.253
- type: recall_at_1000
value: 82.907
- type: recall_at_3
value: 38.879000000000005
- type: recall_at_5
value: 44.651999999999994
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: 41b686a7f28c59bcaaa5791efd47c67c8ebe28be
metrics:
- type: map_at_1
value: 9.142
- type: map_at_10
value: 13.841999999999999
- type: map_at_100
value: 14.960999999999999
- type: map_at_1000
value: 15.187000000000001
- type: map_at_3
value: 11.966000000000001
- type: map_at_5
value: 12.921
- type: ndcg_at_1
value: 18.364
- type: ndcg_at_10
value: 18.590999999999998
- type: ndcg_at_100
value: 24.153
- type: ndcg_at_1000
value: 29.104000000000003
- type: ndcg_at_3
value: 16.323
- type: ndcg_at_5
value: 17.000999999999998
- type: precision_at_1
value: 18.364
- type: precision_at_10
value: 5.216
- type: precision_at_100
value: 1.09
- type: precision_at_1000
value: 0.193
- type: precision_at_3
value: 10.751
- type: precision_at_5
value: 7.932
- type: recall_at_1
value: 9.142
- type: recall_at_10
value: 22.747
- type: recall_at_100
value: 44.585
- type: recall_at_1000
value: 75.481
- type: recall_at_3
value: 14.602
- type: recall_at_5
value: 17.957
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: 766870b35a1b9ca65e67a0d1913899973551fc6c
metrics:
- type: map_at_1
value: 18.677
- type: map_at_10
value: 26.616
- type: map_at_100
value: 27.605
- type: map_at_1000
value: 27.711999999999996
- type: map_at_3
value: 24.396
- type: map_at_5
value: 25.627
- type: ndcg_at_1
value: 37.352999999999994
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 38.423
- type: ndcg_at_1000
value: 40.947
- type: ndcg_at_3
value: 29.885
- type: ndcg_at_5
value: 31.874999999999996
- type: precision_at_1
value: 37.352999999999994
- type: precision_at_10
value: 7.539999999999999
- type: precision_at_100
value: 1.107
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.938
- type: precision_at_5
value: 12.943
- type: recall_at_1
value: 18.677
- type: recall_at_10
value: 37.698
- type: recall_at_100
value: 55.354000000000006
- type: recall_at_1000
value: 72.255
- type: recall_at_3
value: 28.406
- type: recall_at_5
value: 32.357
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 8d743909f834c38949e8323a8a6ce8721ea6c7f4
metrics:
- type: accuracy
value: 74.3292
- type: ap
value: 68.30186110189658
- type: f1
value: 74.20709636944783
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: validation
revision: e6838a846e2408f22cf5cc337ebc83e0bcf77849
metrics:
- type: map_at_1
value: 6.889000000000001
- type: map_at_10
value: 12.321
- type: map_at_100
value: 13.416
- type: map_at_1000
value: 13.525
- type: map_at_3
value: 10.205
- type: map_at_5
value: 11.342
- type: ndcg_at_1
value: 7.092
- type: ndcg_at_10
value: 15.827
- type: ndcg_at_100
value: 21.72
- type: ndcg_at_1000
value: 24.836
- type: ndcg_at_3
value: 11.393
- type: ndcg_at_5
value: 13.462
- type: precision_at_1
value: 7.092
- type: precision_at_10
value: 2.7969999999999997
- type: precision_at_100
value: 0.583
- type: precision_at_1000
value: 0.08499999999999999
- type: precision_at_3
value: 5.019
- type: precision_at_5
value: 4.06
- type: recall_at_1
value: 6.889000000000001
- type: recall_at_10
value: 26.791999999999998
- type: recall_at_100
value: 55.371
- type: recall_at_1000
value: 80.12899999999999
- type: recall_at_3
value: 14.573
- type: recall_at_5
value: 19.557
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 89.6374829001368
- type: f1
value: 89.20878379358307
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 84.54212454212454
- type: f1
value: 82.81080100037023
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 86.46430953969313
- type: f1
value: 86.00019824223267
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 81.31850923896022
- type: f1
value: 81.07860454762863
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 58.23234134098243
- type: f1
value: 56.63845098081841
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: a7e2a951126a26fc8c6a69f835f33a346ba259e3
metrics:
- type: accuracy
value: 72.28571428571429
- type: f1
value: 70.95796714592039
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 70.68171454628363
- type: f1
value: 52.57188062729139
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 60.521273598196665
- type: f1
value: 42.70492970339204
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 64.32288192128087
- type: f1
value: 45.97360620220273
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 58.67209520826808
- type: f1
value: 42.82844991304579
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 41.95769092864826
- type: f1
value: 28.914127631431263
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: 6299947a7777084cc2d4b64235bf7190381ce755
metrics:
- type: accuracy
value: 55.28390596745027
- type: f1
value: 38.33899250561289
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 072a486a144adf7f4479a4a0dddb2152e161e1ea
metrics:
- type: accuracy
value: 70.00336247478144
- type: f1
value: 68.72041942191649
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.0268997982515
- type: f1
value: 75.29844481506652
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: dcefc037ef84348e49b0d29109e891c01067226b
metrics:
- type: v_measure
value: 30.327566856300813
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 3cd0e71dfbe09d4de0f9e5ecba43e7ce280959dc
metrics:
- type: v_measure
value: 28.01650210863619
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.11041256752524
- type: mrr
value: 32.14172939750204
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: 7eb63cc0c1eb59324d709ebed25fcab851fa7610
metrics:
- type: map_at_1
value: 3.527
- type: map_at_10
value: 9.283
- type: map_at_100
value: 11.995000000000001
- type: map_at_1000
value: 13.33
- type: map_at_3
value: 6.223
- type: map_at_5
value: 7.68
- type: ndcg_at_1
value: 36.223
- type: ndcg_at_10
value: 28.255999999999997
- type: ndcg_at_100
value: 26.355
- type: ndcg_at_1000
value: 35.536
- type: ndcg_at_3
value: 31.962000000000003
- type: ndcg_at_5
value: 30.61
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 21.889
- type: precision_at_100
value: 7.1080000000000005
- type: precision_at_1000
value: 1.989
- type: precision_at_3
value: 30.857
- type: precision_at_5
value: 27.307
- type: recall_at_1
value: 3.527
- type: recall_at_10
value: 14.015
- type: recall_at_100
value: 28.402
- type: recall_at_1000
value: 59.795
- type: recall_at_3
value: 7.5969999999999995
- type: recall_at_5
value: 10.641
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: 6062aefc120bfe8ece5897809fb2e53bfe0d128c
metrics:
- type: map_at_1
value: 11.631
- type: map_at_10
value: 19.532
- type: map_at_100
value: 20.821
- type: map_at_1000
value: 20.910999999999998
- type: map_at_3
value: 16.597
- type: map_at_5
value: 18.197
- type: ndcg_at_1
value: 13.413
- type: ndcg_at_10
value: 24.628
- type: ndcg_at_100
value: 30.883
- type: ndcg_at_1000
value: 33.216
- type: ndcg_at_3
value: 18.697
- type: ndcg_at_5
value: 21.501
- type: precision_at_1
value: 13.413
- type: precision_at_10
value: 4.571
- type: precision_at_100
value: 0.812
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 8.845
- type: precision_at_5
value: 6.889000000000001
- type: recall_at_1
value: 11.631
- type: recall_at_10
value: 38.429
- type: recall_at_100
value: 67.009
- type: recall_at_1000
value: 84.796
- type: recall_at_3
value: 22.74
- type: recall_at_5
value: 29.266
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: 6205996560df11e3a3da9ab4f926788fc30a7db4
metrics:
- type: map_at_1
value: 66.64
- type: map_at_10
value: 80.394
- type: map_at_100
value: 81.099
- type: map_at_1000
value: 81.122
- type: map_at_3
value: 77.289
- type: map_at_5
value: 79.25999999999999
- type: ndcg_at_1
value: 76.85
- type: ndcg_at_10
value: 84.68
- type: ndcg_at_100
value: 86.311
- type: ndcg_at_1000
value: 86.49900000000001
- type: ndcg_at_3
value: 81.295
- type: ndcg_at_5
value: 83.199
- type: precision_at_1
value: 76.85
- type: precision_at_10
value: 12.928999999999998
- type: precision_at_100
value: 1.51
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 35.557
- type: precision_at_5
value: 23.576
- type: recall_at_1
value: 66.64
- type: recall_at_10
value: 93.059
- type: recall_at_100
value: 98.922
- type: recall_at_1000
value: 99.883
- type: recall_at_3
value: 83.49499999999999
- type: recall_at_5
value: 88.729
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: b2805658ae38990172679479369a78b86de8c390
metrics:
- type: v_measure
value: 42.17131361041068
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 48.01815621479994
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: 5c59ef3e437a0a9651c8fe6fde943e7dce59fba5
metrics:
- type: map_at_1
value: 3.198
- type: map_at_10
value: 7.550999999999999
- type: map_at_100
value: 9.232
- type: map_at_1000
value: 9.51
- type: map_at_3
value: 5.2940000000000005
- type: map_at_5
value: 6.343999999999999
- type: ndcg_at_1
value: 15.8
- type: ndcg_at_10
value: 13.553999999999998
- type: ndcg_at_100
value: 20.776
- type: ndcg_at_1000
value: 26.204
- type: ndcg_at_3
value: 12.306000000000001
- type: ndcg_at_5
value: 10.952
- type: precision_at_1
value: 15.8
- type: precision_at_10
value: 7.180000000000001
- type: precision_at_100
value: 1.762
- type: precision_at_1000
value: 0.307
- type: precision_at_3
value: 11.333
- type: precision_at_5
value: 9.62
- type: recall_at_1
value: 3.198
- type: recall_at_10
value: 14.575
- type: recall_at_100
value: 35.758
- type: recall_at_1000
value: 62.317
- type: recall_at_3
value: 6.922000000000001
- type: recall_at_5
value: 9.767000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 84.5217161312271
- type: cos_sim_spearman
value: 79.58562467776268
- type: euclidean_pearson
value: 76.69364353942403
- type: euclidean_spearman
value: 74.68959282070473
- type: manhattan_pearson
value: 76.81159265133732
- type: manhattan_spearman
value: 74.7519444048176
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: fdf84275bb8ce4b49c971d02e84dd1abc677a50f
metrics:
- type: cos_sim_pearson
value: 83.70403706922605
- type: cos_sim_spearman
value: 74.28502198729447
- type: euclidean_pearson
value: 83.32719404608066
- type: euclidean_spearman
value: 75.92189433460788
- type: manhattan_pearson
value: 83.35841543005293
- type: manhattan_spearman
value: 75.94458615451978
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 1591bfcbe8c69d4bf7fe2a16e2451017832cafb9
metrics:
- type: cos_sim_pearson
value: 84.94127878986795
- type: cos_sim_spearman
value: 85.35148434923192
- type: euclidean_pearson
value: 81.71127467071571
- type: euclidean_spearman
value: 82.88240481546771
- type: manhattan_pearson
value: 81.72826221967252
- type: manhattan_spearman
value: 82.90725064625128
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: e2125984e7df8b7871f6ae9949cf6b6795e7c54b
metrics:
- type: cos_sim_pearson
value: 83.1474704168523
- type: cos_sim_spearman
value: 79.20612995350827
- type: euclidean_pearson
value: 78.85993329596555
- type: euclidean_spearman
value: 78.91956572744715
- type: manhattan_pearson
value: 78.89999720522347
- type: manhattan_spearman
value: 78.93956842550107
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: 1cd7298cac12a96a373b6a2f18738bb3e739a9b6
metrics:
- type: cos_sim_pearson
value: 84.81255514055894
- type: cos_sim_spearman
value: 85.5217140762934
- type: euclidean_pearson
value: 82.15024353784499
- type: euclidean_spearman
value: 83.04155334389833
- type: manhattan_pearson
value: 82.18598945053624
- type: manhattan_spearman
value: 83.07248357693301
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 360a0b2dff98700d09e634a01e1cc1624d3e42cd
metrics:
- type: cos_sim_pearson
value: 80.63248465157822
- type: cos_sim_spearman
value: 82.53853238521991
- type: euclidean_pearson
value: 78.33936863828221
- type: euclidean_spearman
value: 79.16305579487414
- type: manhattan_pearson
value: 78.3888359870894
- type: manhattan_spearman
value: 79.18504473136467
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: 9fc37e8c632af1c87a3d23e685d49552a02582a0
metrics:
- type: cos_sim_pearson
value: 90.09066290639687
- type: cos_sim_spearman
value: 90.43893699357069
- type: euclidean_pearson
value: 82.39520777222396
- type: euclidean_spearman
value: 81.23948185395952
- type: manhattan_pearson
value: 82.35529784653383
- type: manhattan_spearman
value: 81.12681522483975
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 2de6ce8c1921b71a755b262c6b57fef195dd7906
metrics:
- type: cos_sim_pearson
value: 63.52752323046846
- type: cos_sim_spearman
value: 63.19719780439462
- type: euclidean_pearson
value: 58.29085490641428
- type: euclidean_spearman
value: 58.975178656335046
- type: manhattan_pearson
value: 58.183542772416985
- type: manhattan_spearman
value: 59.190630462178994
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: 8913289635987208e6e7c72789e4be2fe94b6abd
metrics:
- type: cos_sim_pearson
value: 85.45100366635687
- type: cos_sim_spearman
value: 85.66816193002651
- type: euclidean_pearson
value: 81.87976731329091
- type: euclidean_spearman
value: 82.01382867690964
- type: manhattan_pearson
value: 81.88260155706726
- type: manhattan_spearman
value: 82.05258597906492
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: 56a6d0140cf6356659e2a7c1413286a774468d44
metrics:
- type: map
value: 77.53549990038017
- type: mrr
value: 93.37474163454556
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: a75ae049398addde9b70f6b268875f5cbce99089
metrics:
- type: map_at_1
value: 31.167
- type: map_at_10
value: 40.778
- type: map_at_100
value: 42.063
- type: map_at_1000
value: 42.103
- type: map_at_3
value: 37.12
- type: map_at_5
value: 39.205
- type: ndcg_at_1
value: 33.667
- type: ndcg_at_10
value: 46.662
- type: ndcg_at_100
value: 51.995999999999995
- type: ndcg_at_1000
value: 53.254999999999995
- type: ndcg_at_3
value: 39.397999999999996
- type: ndcg_at_5
value: 42.934
- type: precision_at_1
value: 33.667
- type: precision_at_10
value: 7.1
- type: precision_at_100
value: 0.993
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 16.111
- type: precision_at_5
value: 11.600000000000001
- type: recall_at_1
value: 31.167
- type: recall_at_10
value: 63.744
- type: recall_at_100
value: 87.156
- type: recall_at_1000
value: 97.556
- type: recall_at_3
value: 44.0
- type: recall_at_5
value: 52.556000000000004
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: 5a8256d0dff9c4bd3be3ba3e67e4e70173f802ea
metrics:
- type: cos_sim_accuracy
value: 99.55148514851486
- type: cos_sim_ap
value: 80.535236573428
- type: cos_sim_f1
value: 75.01331912626532
- type: cos_sim_precision
value: 80.27366020524515
- type: cos_sim_recall
value: 70.39999999999999
- type: dot_accuracy
value: 99.04851485148515
- type: dot_ap
value: 28.505358821499726
- type: dot_f1
value: 36.36363636363637
- type: dot_precision
value: 37.160751565762006
- type: dot_recall
value: 35.6
- type: euclidean_accuracy
value: 99.4990099009901
- type: euclidean_ap
value: 74.95819047075476
- type: euclidean_f1
value: 71.15489874110564
- type: euclidean_precision
value: 78.59733978234583
- type: euclidean_recall
value: 65.0
- type: manhattan_accuracy
value: 99.50198019801981
- type: manhattan_ap
value: 75.02070096015086
- type: manhattan_f1
value: 71.20535714285712
- type: manhattan_precision
value: 80.55555555555556
- type: manhattan_recall
value: 63.800000000000004
- type: max_accuracy
value: 99.55148514851486
- type: max_ap
value: 80.535236573428
- type: max_f1
value: 75.01331912626532
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 70a89468f6dccacc6aa2b12a6eac54e74328f235
metrics:
- type: v_measure
value: 54.13314692311623
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: d88009ab563dd0b16cfaf4436abaf97fa3550cf0
metrics:
- type: v_measure
value: 31.115181648287145
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: ef807ea29a75ec4f91b50fd4191cb4ee4589a9f9
metrics:
- type: map
value: 44.771112666694336
- type: mrr
value: 45.30415764790765
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: 8753c2788d36c01fc6f05d03fe3f7268d63f9122
metrics:
- type: cos_sim_pearson
value: 30.849429597669374
- type: cos_sim_spearman
value: 30.384175038360194
- type: dot_pearson
value: 29.030383429536823
- type: dot_spearman
value: 28.03273624951732
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: 2c8041b2c07a79b6f7ba8fe6acc72e5d9f92d217
metrics:
- type: map_at_1
value: 0.19499999999999998
- type: map_at_10
value: 1.0959999999999999
- type: map_at_100
value: 5.726
- type: map_at_1000
value: 13.611999999999998
- type: map_at_3
value: 0.45399999999999996
- type: map_at_5
value: 0.67
- type: ndcg_at_1
value: 71.0
- type: ndcg_at_10
value: 55.352999999999994
- type: ndcg_at_100
value: 40.797
- type: ndcg_at_1000
value: 35.955999999999996
- type: ndcg_at_3
value: 63.263000000000005
- type: ndcg_at_5
value: 60.14000000000001
- type: precision_at_1
value: 78.0
- type: precision_at_10
value: 56.99999999999999
- type: precision_at_100
value: 41.199999999999996
- type: precision_at_1000
value: 16.154
- type: precision_at_3
value: 66.667
- type: precision_at_5
value: 62.8
- type: recall_at_1
value: 0.19499999999999998
- type: recall_at_10
value: 1.3639999999999999
- type: recall_at_100
value: 9.317
- type: recall_at_1000
value: 33.629999999999995
- type: recall_at_3
value: 0.49300000000000005
- type: recall_at_5
value: 0.756
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: 527b7d77e16e343303e68cb6af11d6e18b9f7b3b
metrics:
- type: map_at_1
value: 1.335
- type: map_at_10
value: 6.293
- type: map_at_100
value: 10.928
- type: map_at_1000
value: 12.359
- type: map_at_3
value: 3.472
- type: map_at_5
value: 4.935
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 16.178
- type: ndcg_at_100
value: 28.149
- type: ndcg_at_1000
value: 39.845000000000006
- type: ndcg_at_3
value: 19.171
- type: ndcg_at_5
value: 17.864
- type: precision_at_1
value: 20.408
- type: precision_at_10
value: 14.49
- type: precision_at_100
value: 6.306000000000001
- type: precision_at_1000
value: 1.3860000000000001
- type: precision_at_3
value: 21.088
- type: precision_at_5
value: 18.367
- type: recall_at_1
value: 1.335
- type: recall_at_10
value: 10.825999999999999
- type: recall_at_100
value: 39.251000000000005
- type: recall_at_1000
value: 74.952
- type: recall_at_3
value: 4.9110000000000005
- type: recall_at_5
value: 7.312
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 69.93339999999999
- type: ap
value: 13.87476602492533
- type: f1
value: 53.867357615848555
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: 62146448f05be9e52a36b8ee9936447ea787eede
metrics:
- type: accuracy
value: 62.43916242218449
- type: f1
value: 62.870386304954685
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 091a54f9a36281ce7d6590ec8c75dd485e7e01d4
metrics:
- type: v_measure
value: 37.202082549859796
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 83.65023544137807
- type: cos_sim_ap
value: 65.99787692764193
- type: cos_sim_f1
value: 62.10650887573965
- type: cos_sim_precision
value: 56.30901287553648
- type: cos_sim_recall
value: 69.23482849604221
- type: dot_accuracy
value: 79.10830303391549
- type: dot_ap
value: 48.80109642320246
- type: dot_f1
value: 51.418744625967314
- type: dot_precision
value: 40.30253107683091
- type: dot_recall
value: 71.00263852242745
- type: euclidean_accuracy
value: 82.45812719794957
- type: euclidean_ap
value: 60.09969493259607
- type: euclidean_f1
value: 57.658573789246226
- type: euclidean_precision
value: 55.62913907284768
- type: euclidean_recall
value: 59.84168865435356
- type: manhattan_accuracy
value: 82.46408773916671
- type: manhattan_ap
value: 60.116199786815116
- type: manhattan_f1
value: 57.683903860160235
- type: manhattan_precision
value: 53.41726618705036
- type: manhattan_recall
value: 62.69129287598945
- type: max_accuracy
value: 83.65023544137807
- type: max_ap
value: 65.99787692764193
- type: max_f1
value: 62.10650887573965
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34943920518494
- type: cos_sim_ap
value: 84.5428891020442
- type: cos_sim_f1
value: 77.09709933923172
- type: cos_sim_precision
value: 74.83150952967607
- type: cos_sim_recall
value: 79.50415768401602
- type: dot_accuracy
value: 84.53448208949432
- type: dot_ap
value: 73.96328242371995
- type: dot_f1
value: 70.00553786515299
- type: dot_precision
value: 63.58777665995976
- type: dot_recall
value: 77.86418232214352
- type: euclidean_accuracy
value: 86.87662514068381
- type: euclidean_ap
value: 81.45499631520235
- type: euclidean_f1
value: 73.46567109816063
- type: euclidean_precision
value: 69.71037533697381
- type: euclidean_recall
value: 77.6485987064983
- type: manhattan_accuracy
value: 86.88244654014825
- type: manhattan_ap
value: 81.47180273946366
- type: manhattan_f1
value: 73.44624393136418
- type: manhattan_precision
value: 70.80385852090032
- type: manhattan_recall
value: 76.29350169387126
- type: max_accuracy
value: 88.34943920518494
- type: max_ap
value: 84.5428891020442
- type: max_f1
value: 77.09709933923172
---
# SGPT-5.8B-weightedmean-msmarco-specb-bitfit
## Usage
For usage instructions, refer to our codebase: https://github.com/Muennighoff/sgpt
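As a quick illustration, the snippet below shows one way to load and query the model with `sentence-transformers`. This is a minimal sketch, assuming the checkpoint is published on the Hugging Face Hub under the model name above; it omits the special-bracket (`specb`) query/document markup described in the codebase, so refer to the repository for canonical usage.
```python
# Minimal usage sketch (see the SGPT repo for the canonical, bracket-aware usage).
from sentence_transformers import SentenceTransformer

# Assumed Hub ID based on the model name; a 5.8B model requires a large GPU.
model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")

sentences = [
    "What is semantic search?",
    "Semantic search retrieves documents by meaning rather than by keywords.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 4096), given the pooling config below
```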
## Evaluation Results
For evaluation results, refer to our paper: https://arxiv.org/abs/2202.08904
## Training
The model was trained with the following parameters (a consolidated sketch follows the parameter blocks below):
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the `fit()` method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 5e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
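Putting the pieces above together, here is a hedged sketch of how such a run could be assembled with `sentence-transformers`. The `train_examples` variable is placeholder data, not the actual MSMARCO triplet pipeline, which lives in the linked codebase.
```python
# Hedged reconstruction of the training setup from the parameters above.
# `train_examples` is placeholder data, not the real MSMARCO pipeline.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("Muennighoff/SGPT-5.8B-weightedmean-msmarco-specb-bitfit")

train_examples = [
    InputExample(texts=["a query", "a relevant passage", "a hard negative"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# scale=20.0 with the default cosine similarity, matching the loss dump above
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=1000,
    optimizer_params={"lr": 5e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```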
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```
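The `pooling_mode_weightedmean_tokens` flag corresponds to SGPT's position-weighted mean pooling: token i (1-indexed) in a sequence of length S is weighted by i / (1 + 2 + ... + S), so later tokens, which have attended to more context under causal attention, contribute more. The snippet below is a conceptual sketch of that computation, not the library's internal implementation.
```python
# Conceptual sketch of position-weighted mean pooling (not the library internals).
import torch

def weighted_mean_pool(hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len)
    weights = torch.arange(
        1, hidden_states.size(1) + 1,
        device=hidden_states.device, dtype=hidden_states.dtype,
    )
    weights = weights.unsqueeze(0) * attention_mask  # zero out padding positions
    weights = weights / weights.sum(dim=1, keepdim=True)  # normalize per sequence
    return (hidden_states * weights.unsqueeze(-1)).sum(dim=1)  # (batch, dim)
```
With `word_embedding_dimension: 4096`, the pooled output is a single 4096-dimensional vector per input sequence.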
## Citing & Authors
```bibtex
@article{muennighoff2022sgpt,
title={SGPT: GPT Sentence Embeddings for Semantic Search},
author={Muennighoff, Niklas},
journal={arXiv preprint arXiv:2202.08904},
year={2022}
}
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
Omartificial-Intelligence-Space/Arabic-mpnet-base-all-nli-triplet | Omartificial-Intelligence-Space | sentence-similarity | [
"sentence-transformers",
"safetensors",
"mpnet",
"mteb",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"arxiv:2407.21139",
"base_model:tomaarsen/mpnet-base-all-nli-triplet",
"base_model:finetune:tomaarsen/mpnet-base-all-nli-triplet",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-06-15T22:01:53 | 2025-01-23T10:32:36 | 50 | 10 | ---
base_model: tomaarsen/mpnet-base-all-nli-triplet
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
language:
- ar
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- mteb
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط
النظيفة
sentences:
- رجل يقدم عرضاً
- هناك رجل بالخارج قرب الشاطئ
- رجل يجلس على أريكه
- source_sentence: رجل يقفز إلى سريره القذر
sentences:
- السرير قذر.
- رجل يضحك أثناء غسيل الملابس
- الرجل على القمر
- source_sentence: الفتيات بالخارج
sentences:
- امرأة تلف الخيط إلى كرات بجانب كومة من الكرات
- فتيان يركبان في جولة متعة
- ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة تتحدث
إليهن
- source_sentence: الرجل يرتدي قميصاً أزرق.
sentences:
- رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة حمراء
مع الماء في الخلفية.
- كتاب القصص مفتوح
- رجل يرتدي قميص أسود يعزف على الجيتار.
- source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة
شابة.
sentences:
- ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه
- رجل يستلقي على وجهه على مقعد في الحديقة.
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة
model-index:
- name: SentenceTransformer based on tomaarsen/mpnet-base-all-nli-triplet
results:
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrieval (ar)
type: miracl/mmteb-miracl
config: ar
split: dev
revision: main
metrics:
- type: ndcg_at_1
value: 1.934
- type: ndcg_at_3
value: 2.461
- type: ndcg_at_5
value: 2.907
- type: ndcg_at_10
value: 3.581
- type: ndcg_at_20
value: 4.041
- type: ndcg_at_100
value: 5.669
- type: ndcg_at_1000
value: 8.247
- type: map_at_1
value: 1.298
- type: map_at_3
value: 1.974
- type: map_at_5
value: 2.236
- type: map_at_10
value: 2.503
- type: map_at_20
value: 2.6310000000000002
- type: map_at_100
value: 2.8529999999999998
- type: map_at_1000
value: 2.939
- type: recall_at_1
value: 1.298
- type: recall_at_3
value: 2.785
- type: recall_at_5
value: 3.878
- type: recall_at_10
value: 5.738
- type: recall_at_20
value: 7.2940000000000005
- type: recall_at_100
value: 14.999
- type: recall_at_1000
value: 33.268
- type: precision_at_1
value: 1.934
- type: precision_at_3
value: 1.485
- type: precision_at_5
value: 1.222
- type: precision_at_10
value: 0.9249999999999999
- type: precision_at_20
value: 0.608
- type: precision_at_100
value: 0.263
- type: precision_at_1000
value: 0.061
- type: mrr_at_1
value: 1.9337
- type: mrr_at_3
value: 2.9236
- type: mrr_at_5
value: 3.2361
- type: mrr_at_10
value: 3.5991000000000004
- type: mrr_at_20
value: 3.7424
- type: mrr_at_100
value: 3.9737
- type: mrr_at_1000
value: 4.0521
- type: nauc_ndcg_at_1_max
value: 18.7293
- type: nauc_ndcg_at_1_std
value: -22.227
- type: nauc_ndcg_at_1_diff1
value: 53.751099999999994
- type: nauc_ndcg_at_3_max
value: 13.960700000000001
- type: nauc_ndcg_at_3_std
value: -19.653100000000002
- type: nauc_ndcg_at_3_diff1
value: 39.860800000000005
- type: nauc_ndcg_at_5_max
value: 12.2772
- type: nauc_ndcg_at_5_std
value: -19.7249
- type: nauc_ndcg_at_5_diff1
value: 35.011199999999995
- type: nauc_ndcg_at_10_max
value: 9.7866
- type: nauc_ndcg_at_10_std
value: -19.2077
- type: nauc_ndcg_at_10_diff1
value: 29.893900000000002
- type: nauc_ndcg_at_20_max
value: 8.677700000000002
- type: nauc_ndcg_at_20_std
value: -18.2092
- type: nauc_ndcg_at_20_diff1
value: 27.149800000000003
- type: nauc_ndcg_at_100_max
value: 8.693900000000001
- type: nauc_ndcg_at_100_std
value: -15.490100000000002
- type: nauc_ndcg_at_100_diff1
value: 22.0869
- type: nauc_ndcg_at_1000_max
value: 8.8565
- type: nauc_ndcg_at_1000_std
value: -14.285200000000001
- type: nauc_ndcg_at_1000_diff1
value: 19.5158
- type: nauc_map_at_1_max
value: 18.909100000000002
- type: nauc_map_at_1_std
value: -24.4301
- type: nauc_map_at_1_diff1
value: 60.7617
- type: nauc_map_at_3_max
value: 14.1068
- type: nauc_map_at_3_std
value: -21.1018
- type: nauc_map_at_3_diff1
value: 43.9158
- type: nauc_map_at_5_max
value: 13.1835
- type: nauc_map_at_5_std
value: -20.8493
- type: nauc_map_at_5_diff1
value: 39.895399999999995
- type: nauc_map_at_10_max
value: 11.8414
- type: nauc_map_at_10_std
value: -20.279
- type: nauc_map_at_10_diff1
value: 36.4339
- type: nauc_map_at_20_max
value: 11.1734
- type: nauc_map_at_20_std
value: -19.801299999999998
- type: nauc_map_at_20_diff1
value: 34.8787
- type: nauc_map_at_100_max
value: 11.018
- type: nauc_map_at_100_std
value: -19.1222
- type: nauc_map_at_100_diff1
value: 33.216699999999996
- type: nauc_map_at_1000_max
value: 11.120199999999999
- type: nauc_map_at_1000_std
value: -18.8841
- type: nauc_map_at_1000_diff1
value: 32.8634
- type: nauc_recall_at_1_max
value: 18.909100000000002
- type: nauc_recall_at_1_std
value: -24.4301
- type: nauc_recall_at_1_diff1
value: 60.7617
- type: nauc_recall_at_3_max
value: 11.9728
- type: nauc_recall_at_3_std
value: -18.6359
- type: nauc_recall_at_3_diff1
value: 35.7044
- type: nauc_recall_at_5_max
value: 9.5557
- type: nauc_recall_at_5_std
value: -18.8616
- type: nauc_recall_at_5_diff1
value: 27.9593
- type: nauc_recall_at_10_max
value: 5.581300000000001
- type: nauc_recall_at_10_std
value: -18.3274
- type: nauc_recall_at_10_diff1
value: 21.3123
- type: nauc_recall_at_20_max
value: 4.2211
- type: nauc_recall_at_20_std
value: -16.7507
- type: nauc_recall_at_20_diff1
value: 17.9617
- type: nauc_recall_at_100_max
value: 5.5294
- type: nauc_recall_at_100_std
value: -11.9885
- type: nauc_recall_at_100_diff1
value: 11.269
- type: nauc_recall_at_1000_max
value: 5.6486
- type: nauc_recall_at_1000_std
value: -11.1735
- type: nauc_recall_at_1000_diff1
value: 9.0209
- type: nauc_precision_at_1_max
value: 18.7293
- type: nauc_precision_at_1_std
value: -22.227
- type: nauc_precision_at_1_diff1
value: 53.751099999999994
- type: nauc_precision_at_3_max
value: 13.1207
- type: nauc_precision_at_3_std
value: -17.6116
- type: nauc_precision_at_3_diff1
value: 32.0242
- type: nauc_precision_at_5_max
value: 12.2403
- type: nauc_precision_at_5_std
value: -16.9403
- type: nauc_precision_at_5_diff1
value: 26.3656
- type: nauc_precision_at_10_max
value: 9.5427
- type: nauc_precision_at_10_std
value: -16.5917
- type: nauc_precision_at_10_diff1
value: 21.297
- type: nauc_precision_at_20_max
value: 8.2911
- type: nauc_precision_at_20_std
value: -14.3532
- type: nauc_precision_at_20_diff1
value: 17.999599999999997
- type: nauc_precision_at_100_max
value: 10.3474
- type: nauc_precision_at_100_std
value: -7.6601
- type: nauc_precision_at_100_diff1
value: 12.3374
- type: nauc_precision_at_1000_max
value: 10.9218
- type: nauc_precision_at_1000_std
value: -4.5216
- type: nauc_precision_at_1000_diff1
value: 8.4976
- type: nauc_mrr_at_1_max
value: 18.7293
- type: nauc_mrr_at_1_std
value: -22.227
- type: nauc_mrr_at_1_diff1
value: 53.751099999999994
- type: nauc_mrr_at_3_max
value: 14.973700000000001
- type: nauc_mrr_at_3_std
value: -19.781000000000002
- type: nauc_mrr_at_3_diff1
value: 39.7143
- type: nauc_mrr_at_5_max
value: 14.2562
- type: nauc_mrr_at_5_std
value: -19.3477
- type: nauc_mrr_at_5_diff1
value: 37.0654
- type: nauc_mrr_at_10_max
value: 12.6741
- type: nauc_mrr_at_10_std
value: -19.4737
- type: nauc_mrr_at_10_diff1
value: 34.4683
- type: nauc_mrr_at_20_max
value: 12.1728
- type: nauc_mrr_at_20_std
value: -19.186500000000002
- type: nauc_mrr_at_20_diff1
value: 33.287299999999995
- type: nauc_mrr_at_100_max
value: 11.9865
- type: nauc_mrr_at_100_std
value: -18.7337
- type: nauc_mrr_at_100_diff1
value: 32.0965
- type: nauc_mrr_at_1000_max
value: 11.9275
- type: nauc_mrr_at_1000_std
value: -18.6911
- type: nauc_mrr_at_1000_diff1
value: 31.8893
- type: main_score
value: 3.581
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrievalHardNegatives (ar)
type: mteb/miracl-hard-negatives
config: ar
split: dev
revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb
metrics:
- type: ndcg_at_1
value: 3.2
- type: ndcg_at_3
value: 4.223
- type: ndcg_at_5
value: 4.941
- type: ndcg_at_10
value: 6.198
- type: ndcg_at_20
value: 7.405
- type: ndcg_at_100
value: 10.586
- type: ndcg_at_1000
value: 14.695
- type: map_at_1
value: 2.083
- type: map_at_3
value: 3.382
- type: map_at_5
value: 3.805
- type: map_at_10
value: 4.314
- type: map_at_20
value: 4.662
- type: map_at_100
value: 5.133
- type: map_at_1000
value: 5.288
- type: recall_at_1
value: 2.083
- type: recall_at_3
value: 4.941
- type: recall_at_5
value: 6.641
- type: recall_at_10
value: 9.998
- type: recall_at_20
value: 13.971
- type: recall_at_100
value: 28.610000000000003
- type: recall_at_1000
value: 56.98800000000001
- type: precision_at_1
value: 3.2
- type: precision_at_3
value: 2.4330000000000003
- type: precision_at_5
value: 2.02
- type: precision_at_10
value: 1.63
- type: precision_at_20
value: 1.23
- type: precision_at_100
value: 0.538
- type: precision_at_1000
value: 0.11
- type: mrr_at_1
value: 3.2
- type: mrr_at_3
value: 4.9167000000000005
- type: mrr_at_5
value: 5.4817
- type: mrr_at_10
value: 6.1372
- type: mrr_at_20
value: 6.4818
- type: mrr_at_100
value: 6.9077
- type: mrr_at_1000
value: 7.017900000000001
- type: nauc_ndcg_at_1_max
value: 7.5344999999999995
- type: nauc_ndcg_at_1_std
value: -17.3808
- type: nauc_ndcg_at_1_diff1
value: 23.0707
- type: nauc_ndcg_at_3_max
value: 9.2206
- type: nauc_ndcg_at_3_std
value: -12.559400000000002
- type: nauc_ndcg_at_3_diff1
value: 16.543
- type: nauc_ndcg_at_5_max
value: 7.2911
- type: nauc_ndcg_at_5_std
value: -13.4758
- type: nauc_ndcg_at_5_diff1
value: 15.2764
- type: nauc_ndcg_at_10_max
value: 5.4578
- type: nauc_ndcg_at_10_std
value: -14.1635
- type: nauc_ndcg_at_10_diff1
value: 13.047900000000002
- type: nauc_ndcg_at_20_max
value: 7.0633
- type: nauc_ndcg_at_20_std
value: -12.3854
- type: nauc_ndcg_at_20_diff1
value: 11.6855
- type: nauc_ndcg_at_100_max
value: 10.4362
- type: nauc_ndcg_at_100_std
value: -9.9392
- type: nauc_ndcg_at_100_diff1
value: 11.9351
- type: nauc_ndcg_at_1000_max
value: 11.5675
- type: nauc_ndcg_at_1000_std
value: -8.5511
- type: nauc_ndcg_at_1000_diff1
value: 12.418
- type: nauc_map_at_1_max
value: 8.729199999999999
- type: nauc_map_at_1_std
value: -22.5749
- type: nauc_map_at_1_diff1
value: 24.7528
- type: nauc_map_at_3_max
value: 8.6757
- type: nauc_map_at_3_std
value: -14.871899999999998
- type: nauc_map_at_3_diff1
value: 17.5986
- type: nauc_map_at_5_max
value: 7.725999999999999
- type: nauc_map_at_5_std
value: -14.5548
- type: nauc_map_at_5_diff1
value: 16.54
- type: nauc_map_at_10_max
value: 6.399000000000001
- type: nauc_map_at_10_std
value: -14.7618
- type: nauc_map_at_10_diff1
value: 14.735500000000002
- type: nauc_map_at_20_max
value: 6.9674
- type: nauc_map_at_20_std
value: -14.211099999999998
- type: nauc_map_at_20_diff1
value: 14.294599999999999
- type: nauc_map_at_100_max
value: 8.024000000000001
- type: nauc_map_at_100_std
value: -13.2243
- type: nauc_map_at_100_diff1
value: 14.1314
- type: nauc_map_at_1000_max
value: 8.1127
- type: nauc_map_at_1000_std
value: -13.014500000000002
- type: nauc_map_at_1000_diff1
value: 14.1036
- type: nauc_recall_at_1_max
value: 8.729199999999999
- type: nauc_recall_at_1_std
value: -22.5749
- type: nauc_recall_at_1_diff1
value: 24.7528
- type: nauc_recall_at_3_max
value: 9.558800000000002
- type: nauc_recall_at_3_std
value: -10.4583
- type: nauc_recall_at_3_diff1
value: 14.2197
- type: nauc_recall_at_5_max
value: 6.5597
- type: nauc_recall_at_5_std
value: -12.167200000000001
- type: nauc_recall_at_5_diff1
value: 13.283900000000001
- type: nauc_recall_at_10_max
value: 2.7824
- type: nauc_recall_at_10_std
value: -13.879800000000001
- type: nauc_recall_at_10_diff1
value: 9.4774
- type: nauc_recall_at_20_max
value: 5.9161
- type: nauc_recall_at_20_std
value: -10.937
- type: nauc_recall_at_20_diff1
value: 7.096900000000001
- type: nauc_recall_at_100_max
value: 12.2712
- type: nauc_recall_at_100_std
value: -7.2211
- type: nauc_recall_at_100_diff1
value: 7.9826999999999995
- type: nauc_recall_at_1000_max
value: 16.5037
- type: nauc_recall_at_1000_std
value: -3.8615999999999997
- type: nauc_recall_at_1000_diff1
value: 10.1532
- type: nauc_precision_at_1_max
value: 7.5344999999999995
- type: nauc_precision_at_1_std
value: -17.3808
- type: nauc_precision_at_1_diff1
value: 23.0707
- type: nauc_precision_at_3_max
value: 8.8492
- type: nauc_precision_at_3_std
value: -11.2959
- type: nauc_precision_at_3_diff1
value: 14.475999999999999
- type: nauc_precision_at_5_max
value: 6.7330000000000005
- type: nauc_precision_at_5_std
value: -11.0518
- type: nauc_precision_at_5_diff1
value: 11.148
- type: nauc_precision_at_10_max
value: 5.7345
- type: nauc_precision_at_10_std
value: -11.168899999999999
- type: nauc_precision_at_10_diff1
value: 10.2786
- type: nauc_precision_at_20_max
value: 10.4611
- type: nauc_precision_at_20_std
value: -5.3885000000000005
- type: nauc_precision_at_20_diff1
value: 9.0225
- type: nauc_precision_at_100_max
value: 16.0671
- type: nauc_precision_at_100_std
value: -0.5837
- type: nauc_precision_at_100_diff1
value: 12.506300000000001
- type: nauc_precision_at_1000_max
value: 13.394
- type: nauc_precision_at_1000_std
value: 2.2683
- type: nauc_precision_at_1000_diff1
value: 10.2308
- type: nauc_mrr_at_1_max
value: 7.5344999999999995
- type: nauc_mrr_at_1_std
value: -17.3808
- type: nauc_mrr_at_1_diff1
value: 23.0707
- type: nauc_mrr_at_3_max
value: 8.5063
- type: nauc_mrr_at_3_std
value: -13.3302
- type: nauc_mrr_at_3_diff1
value: 17.413999999999998
- type: nauc_mrr_at_5_max
value: 7.4507
- type: nauc_mrr_at_5_std
value: -14.0678
- type: nauc_mrr_at_5_diff1
value: 16.5774
- type: nauc_mrr_at_10_max
value: 7.17
- type: nauc_mrr_at_10_std
value: -14.1629
- type: nauc_mrr_at_10_diff1
value: 16.3169
- type: nauc_mrr_at_20_max
value: 7.558
- type: nauc_mrr_at_20_std
value: -13.3002
- type: nauc_mrr_at_20_diff1
value: 15.335299999999998
- type: nauc_mrr_at_100_max
value: 7.947500000000001
- type: nauc_mrr_at_100_std
value: -12.963099999999999
- type: nauc_mrr_at_100_diff1
value: 15.235399999999998
- type: nauc_mrr_at_1000_max
value: 7.9108
- type: nauc_mrr_at_1000_std
value: -12.954099999999999
- type: nauc_mrr_at_1000_diff1
value: 15.2051
- type: main_score
value: 6.198
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-ara)
type: facebook/mlqa
config: ara-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 23.985
- type: ndcg_at_3
value: 31.717000000000002
- type: ndcg_at_5
value: 34.439
- type: ndcg_at_10
value: 36.51
- type: ndcg_at_20
value: 38.442
- type: ndcg_at_100
value: 42.731
- type: ndcg_at_1000
value: 45.137
- type: map_at_1
value: 23.985
- type: map_at_3
value: 29.723
- type: map_at_5
value: 31.241000000000003
- type: map_at_10
value: 32.063
- type: map_at_20
value: 32.607
- type: map_at_100
value: 33.181
- type: map_at_1000
value: 33.278999999999996
- type: recall_at_1
value: 23.985
- type: recall_at_3
value: 37.524
- type: recall_at_5
value: 44.101
- type: recall_at_10
value: 50.67700000000001
- type: recall_at_20
value: 58.221000000000004
- type: recall_at_100
value: 81.625
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 23.985
- type: precision_at_3
value: 12.508
- type: precision_at_5
value: 8.82
- type: precision_at_10
value: 5.0680000000000005
- type: precision_at_20
value: 2.911
- type: precision_at_100
value: 0.8160000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 23.9845
- type: mrr_at_3
value: 29.7228
- type: mrr_at_5
value: 31.2411
- type: mrr_at_10
value: 32.0631
- type: mrr_at_20
value: 32.6073
- type: mrr_at_100
value: 33.1811
- type: mrr_at_1000
value: 33.2789
- type: nauc_ndcg_at_1_max
value: 55.551300000000005
- type: nauc_ndcg_at_1_std
value: 19.2389
- type: nauc_ndcg_at_1_diff1
value: 46.3359
- type: nauc_ndcg_at_3_max
value: 54.64790000000001
- type: nauc_ndcg_at_3_std
value: 20.7714
- type: nauc_ndcg_at_3_diff1
value: 39.2472
- type: nauc_ndcg_at_5_max
value: 52.9641
- type: nauc_ndcg_at_5_std
value: 20.366500000000002
- type: nauc_ndcg_at_5_diff1
value: 38.1887
- type: nauc_ndcg_at_10_max
value: 52.8637
- type: nauc_ndcg_at_10_std
value: 20.069200000000002
- type: nauc_ndcg_at_10_diff1
value: 37.0473
- type: nauc_ndcg_at_20_max
value: 51.578900000000004
- type: nauc_ndcg_at_20_std
value: 19.564500000000002
- type: nauc_ndcg_at_20_diff1
value: 34.5057
- type: nauc_ndcg_at_100_max
value: 52.6159
- type: nauc_ndcg_at_100_std
value: 20.3172
- type: nauc_ndcg_at_100_diff1
value: 35.578199999999995
- type: nauc_ndcg_at_1000_max
value: 53.1581
- type: nauc_ndcg_at_1000_std
value: 20.188
- type: nauc_ndcg_at_1000_diff1
value: 37.285000000000004
- type: nauc_map_at_1_max
value: 55.551300000000005
- type: nauc_map_at_1_std
value: 19.2389
- type: nauc_map_at_1_diff1
value: 46.3359
- type: nauc_map_at_3_max
value: 55.1118
- type: nauc_map_at_3_std
value: 20.3289
- type: nauc_map_at_3_diff1
value: 40.842
- type: nauc_map_at_5_max
value: 54.1547
- type: nauc_map_at_5_std
value: 20.0975
- type: nauc_map_at_5_diff1
value: 40.2913
- type: nauc_map_at_10_max
value: 54.173
- type: nauc_map_at_10_std
value: 20.0246
- type: nauc_map_at_10_diff1
value: 39.8307
- type: nauc_map_at_20_max
value: 53.797799999999995
- type: nauc_map_at_20_std
value: 19.8761
- type: nauc_map_at_20_diff1
value: 39.1152
- type: nauc_map_at_100_max
value: 53.957699999999996
- type: nauc_map_at_100_std
value: 20.0471
- type: nauc_map_at_100_diff1
value: 39.260600000000004
- type: nauc_map_at_1000_max
value: 53.982200000000006
- type: nauc_map_at_1000_std
value: 20.0435
- type: nauc_map_at_1000_diff1
value: 39.334
- type: nauc_recall_at_1_max
value: 55.551300000000005
- type: nauc_recall_at_1_std
value: 19.2389
- type: nauc_recall_at_1_diff1
value: 46.3359
- type: nauc_recall_at_3_max
value: 53.303
- type: nauc_recall_at_3_std
value: 21.9959
- type: nauc_recall_at_3_diff1
value: 34.9686
- type: nauc_recall_at_5_max
value: 49.437599999999996
- type: nauc_recall_at_5_std
value: 21.0745
- type: nauc_recall_at_5_diff1
value: 32.3358
- type: nauc_recall_at_10_max
value: 48.7626
- type: nauc_recall_at_10_std
value: 19.9455
- type: nauc_recall_at_10_diff1
value: 28.7268
- type: nauc_recall_at_20_max
value: 43.4219
- type: nauc_recall_at_20_std
value: 17.959600000000002
- type: nauc_recall_at_20_diff1
value: 17.9683
- type: nauc_recall_at_100_max
value: 46.079
- type: nauc_recall_at_100_std
value: 22.0524
- type: nauc_recall_at_100_diff1
value: 14.742099999999999
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 55.551300000000005
- type: nauc_precision_at_1_std
value: 19.2389
- type: nauc_precision_at_1_diff1
value: 46.3359
- type: nauc_precision_at_3_max
value: 53.303
- type: nauc_precision_at_3_std
value: 21.9959
- type: nauc_precision_at_3_diff1
value: 34.9686
- type: nauc_precision_at_5_max
value: 49.437599999999996
- type: nauc_precision_at_5_std
value: 21.0745
- type: nauc_precision_at_5_diff1
value: 32.3358
- type: nauc_precision_at_10_max
value: 48.7626
- type: nauc_precision_at_10_std
value: 19.9455
- type: nauc_precision_at_10_diff1
value: 28.7268
- type: nauc_precision_at_20_max
value: 43.4219
- type: nauc_precision_at_20_std
value: 17.959600000000002
- type: nauc_precision_at_20_diff1
value: 17.9683
- type: nauc_precision_at_100_max
value: 46.079
- type: nauc_precision_at_100_std
value: 22.0524
- type: nauc_precision_at_100_diff1
value: 14.742099999999999
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 55.551300000000005
- type: nauc_mrr_at_1_std
value: 19.2389
- type: nauc_mrr_at_1_diff1
value: 46.3359
- type: nauc_mrr_at_3_max
value: 55.1118
- type: nauc_mrr_at_3_std
value: 20.3289
- type: nauc_mrr_at_3_diff1
value: 40.842
- type: nauc_mrr_at_5_max
value: 54.1547
- type: nauc_mrr_at_5_std
value: 20.0975
- type: nauc_mrr_at_5_diff1
value: 40.2913
- type: nauc_mrr_at_10_max
value: 54.173
- type: nauc_mrr_at_10_std
value: 20.0246
- type: nauc_mrr_at_10_diff1
value: 39.8307
- type: nauc_mrr_at_20_max
value: 53.797799999999995
- type: nauc_mrr_at_20_std
value: 19.8761
- type: nauc_mrr_at_20_diff1
value: 39.1152
- type: nauc_mrr_at_100_max
value: 53.957699999999996
- type: nauc_mrr_at_100_std
value: 20.0471
- type: nauc_mrr_at_100_diff1
value: 39.260600000000004
- type: nauc_mrr_at_1000_max
value: 53.982200000000006
- type: nauc_mrr_at_1000_std
value: 20.0435
- type: nauc_mrr_at_1000_diff1
value: 39.334
- type: main_score
value: 36.51
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-deu)
type: facebook/mlqa
config: ara-deu
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.483
- type: ndcg_at_3
value: 1.9959999999999998
- type: ndcg_at_5
value: 2.391
- type: ndcg_at_10
value: 3.143
- type: ndcg_at_20
value: 5.194
- type: ndcg_at_100
value: 13.254
- type: ndcg_at_1000
value: 18.717
- type: map_at_1
value: 0.483
- type: map_at_3
value: 1.53
- type: map_at_5
value: 1.7469999999999999
- type: map_at_10
value: 2.041
- type: map_at_20
value: 2.5919999999999996
- type: map_at_100
value: 3.5090000000000003
- type: map_at_1000
value: 3.8
- type: recall_at_1
value: 0.483
- type: recall_at_3
value: 3.382
- type: recall_at_5
value: 4.348
- type: recall_at_10
value: 6.763
- type: recall_at_20
value: 14.976
- type: recall_at_100
value: 61.353
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.483
- type: precision_at_3
value: 1.127
- type: precision_at_5
value: 0.8699999999999999
- type: precision_at_10
value: 0.676
- type: precision_at_20
value: 0.749
- type: precision_at_100
value: 0.614
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.48310000000000003
- type: mrr_at_3
value: 1.5298
- type: mrr_at_5
value: 1.7472
- type: mrr_at_10
value: 2.0409
- type: mrr_at_20
value: 2.5922
- type: mrr_at_100
value: 3.5095
- type: mrr_at_1000
value: 3.8004000000000002
- type: nauc_ndcg_at_1_max
value: -41.553000000000004
- type: nauc_ndcg_at_1_std
value: -41.553000000000004
- type: nauc_ndcg_at_1_diff1
value: -57.523500000000006
- type: nauc_ndcg_at_3_max
value: -44.262
- type: nauc_ndcg_at_3_std
value: -41.594300000000004
- type: nauc_ndcg_at_3_diff1
value: -33.6751
- type: nauc_ndcg_at_5_max
value: -42.9736
- type: nauc_ndcg_at_5_std
value: -42.2472
- type: nauc_ndcg_at_5_diff1
value: -33.2173
- type: nauc_ndcg_at_10_max
value: -31.821700000000003
- type: nauc_ndcg_at_10_std
value: -36.0429
- type: nauc_ndcg_at_10_diff1
value: -19.7423
- type: nauc_ndcg_at_20_max
value: -19.906
- type: nauc_ndcg_at_20_std
value: -25.389200000000002
- type: nauc_ndcg_at_20_diff1
value: -12.357899999999999
- type: nauc_ndcg_at_100_max
value: -14.87
- type: nauc_ndcg_at_100_std
value: -15.4838
- type: nauc_ndcg_at_100_diff1
value: -10.3397
- type: nauc_ndcg_at_1000_max
value: -22.5591
- type: nauc_ndcg_at_1000_std
value: -24.8202
- type: nauc_ndcg_at_1000_diff1
value: -15.3685
- type: nauc_map_at_1_max
value: -41.553000000000004
- type: nauc_map_at_1_std
value: -41.553000000000004
- type: nauc_map_at_1_diff1
value: -57.523500000000006
- type: nauc_map_at_3_max
value: -44.3092
- type: nauc_map_at_3_std
value: -41.9893
- type: nauc_map_at_3_diff1
value: -35.857499999999995
- type: nauc_map_at_5_max
value: -43.298500000000004
- type: nauc_map_at_5_std
value: -42.4017
- type: nauc_map_at_5_diff1
value: -35.0605
- type: nauc_map_at_10_max
value: -37.1022
- type: nauc_map_at_10_std
value: -38.9588
- type: nauc_map_at_10_diff1
value: -26.5455
- type: nauc_map_at_20_max
value: -30.0711
- type: nauc_map_at_20_std
value: -33.1179
- type: nauc_map_at_20_diff1
value: -21.5666
- type: nauc_map_at_100_max
value: -27.4023
- type: nauc_map_at_100_std
value: -29.2105
- type: nauc_map_at_100_diff1
value: -19.9454
- type: nauc_map_at_1000_max
value: -28.6252
- type: nauc_map_at_1000_std
value: -30.6047
- type: nauc_map_at_1000_diff1
value: -20.8378
- type: nauc_recall_at_1_max
value: -41.553000000000004
- type: nauc_recall_at_1_std
value: -41.553000000000004
- type: nauc_recall_at_1_diff1
value: -57.523500000000006
- type: nauc_recall_at_3_max
value: -44.1529
- type: nauc_recall_at_3_std
value: -41.004400000000004
- type: nauc_recall_at_3_diff1
value: -30.7575
- type: nauc_recall_at_5_max
value: -42.5017
- type: nauc_recall_at_5_std
value: -42.0639
- type: nauc_recall_at_5_diff1
value: -31.0911
- type: nauc_recall_at_10_max
value: -25.1079
- type: nauc_recall_at_10_std
value: -32.359
- type: nauc_recall_at_10_diff1
value: -11.9862
- type: nauc_recall_at_20_max
value: -11.081199999999999
- type: nauc_recall_at_20_std
value: -18.5217
- type: nauc_recall_at_20_diff1
value: -5.0226
- type: nauc_recall_at_100_max
value: -5.0011
- type: nauc_recall_at_100_std
value: -3.3889000000000005
- type: nauc_recall_at_100_diff1
value: -3.9987000000000004
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: -41.553000000000004
- type: nauc_precision_at_1_std
value: -41.553000000000004
- type: nauc_precision_at_1_diff1
value: -57.523500000000006
- type: nauc_precision_at_3_max
value: -44.1529
- type: nauc_precision_at_3_std
value: -41.004400000000004
- type: nauc_precision_at_3_diff1
value: -30.7575
- type: nauc_precision_at_5_max
value: -42.5017
- type: nauc_precision_at_5_std
value: -42.0639
- type: nauc_precision_at_5_diff1
value: -31.0911
- type: nauc_precision_at_10_max
value: -25.1079
- type: nauc_precision_at_10_std
value: -32.359
- type: nauc_precision_at_10_diff1
value: -11.9862
- type: nauc_precision_at_20_max
value: -11.081199999999999
- type: nauc_precision_at_20_std
value: -18.5217
- type: nauc_precision_at_20_diff1
value: -5.0226
- type: nauc_precision_at_100_max
value: -5.0011
- type: nauc_precision_at_100_std
value: -3.3889000000000005
- type: nauc_precision_at_100_diff1
value: -3.9987000000000004
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: -41.553000000000004
- type: nauc_mrr_at_1_std
value: -41.553000000000004
- type: nauc_mrr_at_1_diff1
value: -57.523500000000006
- type: nauc_mrr_at_3_max
value: -44.3092
- type: nauc_mrr_at_3_std
value: -41.9893
- type: nauc_mrr_at_3_diff1
value: -35.857499999999995
- type: nauc_mrr_at_5_max
value: -43.298500000000004
- type: nauc_mrr_at_5_std
value: -42.4017
- type: nauc_mrr_at_5_diff1
value: -35.0605
- type: nauc_mrr_at_10_max
value: -37.1022
- type: nauc_mrr_at_10_std
value: -38.9588
- type: nauc_mrr_at_10_diff1
value: -26.5455
- type: nauc_mrr_at_20_max
value: -30.0711
- type: nauc_mrr_at_20_std
value: -33.1179
- type: nauc_mrr_at_20_diff1
value: -21.5666
- type: nauc_mrr_at_100_max
value: -27.4023
- type: nauc_mrr_at_100_std
value: -29.2105
- type: nauc_mrr_at_100_diff1
value: -19.9454
- type: nauc_mrr_at_1000_max
value: -28.6252
- type: nauc_mrr_at_1000_std
value: -30.6047
- type: nauc_mrr_at_1000_diff1
value: -20.8378
- type: main_score
value: 3.143
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-eng)
type: facebook/mlqa
config: ara-eng
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.387
- type: ndcg_at_3
value: 0.799
- type: ndcg_at_5
value: 1.107
- type: ndcg_at_10
value: 1.8950000000000002
- type: ndcg_at_20
value: 2.491
- type: ndcg_at_100
value: 6.7250000000000005
- type: ndcg_at_1000
value: 15.473999999999998
- type: map_at_1
value: 0.387
- type: map_at_3
value: 0.677
- type: map_at_5
value: 0.8410000000000001
- type: map_at_10
value: 1.1520000000000001
- type: map_at_20
value: 1.32
- type: map_at_100
value: 1.82
- type: map_at_1000
value: 2.129
- type: recall_at_1
value: 0.387
- type: recall_at_3
value: 1.161
- type: recall_at_5
value: 1.934
- type: recall_at_10
value: 4.449
- type: recall_at_20
value: 6.77
- type: recall_at_100
value: 30.947999999999997
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.387
- type: precision_at_3
value: 0.387
- type: precision_at_5
value: 0.387
- type: precision_at_10
value: 0.445
- type: precision_at_20
value: 0.338
- type: precision_at_100
value: 0.309
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.3868
- type: mrr_at_3
value: 0.677
- type: mrr_at_5
value: 0.8413999999999999
- type: mrr_at_10
value: 1.1516
- type: mrr_at_20
value: 1.3199
- type: mrr_at_100
value: 1.8199
- type: mrr_at_1000
value: 2.1289
- type: nauc_ndcg_at_1_max
value: 46.4561
- type: nauc_ndcg_at_1_std
value: -32.306200000000004
- type: nauc_ndcg_at_1_diff1
value: 4.4164
- type: nauc_ndcg_at_3_max
value: 21.7988
- type: nauc_ndcg_at_3_std
value: 9.9137
- type: nauc_ndcg_at_3_diff1
value: 31.1407
- type: nauc_ndcg_at_5_max
value: 11.1279
- type: nauc_ndcg_at_5_std
value: 11.2983
- type: nauc_ndcg_at_5_diff1
value: 11.506
- type: nauc_ndcg_at_10_max
value: 13.262199999999998
- type: nauc_ndcg_at_10_std
value: 11.3881
- type: nauc_ndcg_at_10_diff1
value: 8.228100000000001
- type: nauc_ndcg_at_20_max
value: 5.5699
- type: nauc_ndcg_at_20_std
value: 9.5456
- type: nauc_ndcg_at_20_diff1
value: 1.0035
- type: nauc_ndcg_at_100_max
value: 12.0172
- type: nauc_ndcg_at_100_std
value: 14.402999999999999
- type: nauc_ndcg_at_100_diff1
value: -3.5281
- type: nauc_ndcg_at_1000_max
value: 10.545
- type: nauc_ndcg_at_1000_std
value: 12.3847
- type: nauc_ndcg_at_1000_diff1
value: -1.6625999999999999
- type: nauc_map_at_1_max
value: 46.4561
- type: nauc_map_at_1_std
value: -32.306200000000004
- type: nauc_map_at_1_diff1
value: 4.4164
- type: nauc_map_at_3_max
value: 24.696299999999997
- type: nauc_map_at_3_std
value: 1.8696000000000002
- type: nauc_map_at_3_diff1
value: 26.0786
- type: nauc_map_at_5_max
value: 16.475
- type: nauc_map_at_5_std
value: 3.9592
- type: nauc_map_at_5_diff1
value: 13.389499999999998
- type: nauc_map_at_10_max
value: 16.2084
- type: nauc_map_at_10_std
value: 5.8298000000000005
- type: nauc_map_at_10_diff1
value: 10.8911
- type: nauc_map_at_20_max
value: 11.9237
- type: nauc_map_at_20_std
value: 5.7805
- type: nauc_map_at_20_diff1
value: 6.8079
- type: nauc_map_at_100_max
value: 12.779399999999999
- type: nauc_map_at_100_std
value: 8.5426
- type: nauc_map_at_100_diff1
value: 3.11
- type: nauc_map_at_1000_max
value: 12.587200000000001
- type: nauc_map_at_1000_std
value: 8.2159
- type: nauc_map_at_1000_diff1
value: 3.3531
- type: nauc_recall_at_1_max
value: 46.4561
- type: nauc_recall_at_1_std
value: -32.306200000000004
- type: nauc_recall_at_1_diff1
value: 4.4164
- type: nauc_recall_at_3_max
value: 17.041600000000003
- type: nauc_recall_at_3_std
value: 23.9913
- type: nauc_recall_at_3_diff1
value: 39.9943
- type: nauc_recall_at_5_max
value: 3.8781000000000003
- type: nauc_recall_at_5_std
value: 21.1723
- type: nauc_recall_at_5_diff1
value: 7.9961
- type: nauc_recall_at_10_max
value: 11.1446
- type: nauc_recall_at_10_std
value: 15.9162
- type: nauc_recall_at_10_diff1
value: 5.334
- type: nauc_recall_at_20_max
value: 0.585
- type: nauc_recall_at_20_std
value: 11.422799999999999
- type: nauc_recall_at_20_diff1
value: -4.172
- type: nauc_recall_at_100_max
value: 13.1038
- type: nauc_recall_at_100_std
value: 16.5849
- type: nauc_recall_at_100_diff1
value: -5.8172
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 46.4561
- type: nauc_precision_at_1_std
value: -32.306200000000004
- type: nauc_precision_at_1_diff1
value: 4.4164
- type: nauc_precision_at_3_max
value: 17.041600000000003
- type: nauc_precision_at_3_std
value: 23.9913
- type: nauc_precision_at_3_diff1
value: 39.9943
- type: nauc_precision_at_5_max
value: 3.8781000000000003
- type: nauc_precision_at_5_std
value: 21.1723
- type: nauc_precision_at_5_diff1
value: 7.9961
- type: nauc_precision_at_10_max
value: 11.1446
- type: nauc_precision_at_10_std
value: 15.9162
- type: nauc_precision_at_10_diff1
value: 5.334
- type: nauc_precision_at_20_max
value: 0.585
- type: nauc_precision_at_20_std
value: 11.422799999999999
- type: nauc_precision_at_20_diff1
value: -4.172
- type: nauc_precision_at_100_max
value: 13.1038
- type: nauc_precision_at_100_std
value: 16.5849
- type: nauc_precision_at_100_diff1
value: -5.8172
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 46.4561
- type: nauc_mrr_at_1_std
value: -32.306200000000004
- type: nauc_mrr_at_1_diff1
value: 4.4164
- type: nauc_mrr_at_3_max
value: 24.696299999999997
- type: nauc_mrr_at_3_std
value: 1.8696000000000002
- type: nauc_mrr_at_3_diff1
value: 26.0786
- type: nauc_mrr_at_5_max
value: 16.475
- type: nauc_mrr_at_5_std
value: 3.9592
- type: nauc_mrr_at_5_diff1
value: 13.389499999999998
- type: nauc_mrr_at_10_max
value: 16.2084
- type: nauc_mrr_at_10_std
value: 5.8298000000000005
- type: nauc_mrr_at_10_diff1
value: 10.8911
- type: nauc_mrr_at_20_max
value: 11.9237
- type: nauc_mrr_at_20_std
value: 5.7805
- type: nauc_mrr_at_20_diff1
value: 6.8079
- type: nauc_mrr_at_100_max
value: 12.779399999999999
- type: nauc_mrr_at_100_std
value: 8.5426
- type: nauc_mrr_at_100_diff1
value: 3.11
- type: nauc_mrr_at_1000_max
value: 12.587200000000001
- type: nauc_mrr_at_1000_std
value: 8.2159
- type: nauc_mrr_at_1000_diff1
value: 3.3531
- type: main_score
value: 1.8950000000000002
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-spa)
type: facebook/mlqa
config: ara-spa
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.621
- type: ndcg_at_3
value: 1.9449999999999998
- type: ndcg_at_5
value: 2.7470000000000003
- type: ndcg_at_10
value: 3.936
- type: ndcg_at_20
value: 6.0729999999999995
- type: ndcg_at_100
value: 16.366
- type: ndcg_at_1000
value: 19.769000000000002
- type: map_at_1
value: 0.621
- type: map_at_3
value: 1.553
- type: map_at_5
value: 2.019
- type: map_at_10
value: 2.5
- type: map_at_20
value: 3.055
- type: map_at_100
value: 4.247999999999999
- type: map_at_1000
value: 4.443
- type: recall_at_1
value: 0.621
- type: recall_at_3
value: 3.106
- type: recall_at_5
value: 4.968999999999999
- type: recall_at_10
value: 8.696
- type: recall_at_20
value: 17.391000000000002
- type: recall_at_100
value: 76.398
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.621
- type: precision_at_3
value: 1.035
- type: precision_at_5
value: 0.9939999999999999
- type: precision_at_10
value: 0.8699999999999999
- type: precision_at_20
value: 0.8699999999999999
- type: precision_at_100
value: 0.764
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.6211
- type: mrr_at_3
value: 1.5528
- type: mrr_at_5
value: 2.0185999999999997
- type: mrr_at_10
value: 2.4998
- type: mrr_at_20
value: 3.0547
- type: mrr_at_100
value: 4.2485
- type: mrr_at_1000
value: 4.4432
- type: nauc_ndcg_at_1_max
value: -49.7187
- type: nauc_ndcg_at_1_std
value: -49.7187
- type: nauc_ndcg_at_1_diff1
value: -20.5681
- type: nauc_ndcg_at_3_max
value: -40.8251
- type: nauc_ndcg_at_3_std
value: -30.895400000000002
- type: nauc_ndcg_at_3_diff1
value: -6.4114
- type: nauc_ndcg_at_5_max
value: -28.4846
- type: nauc_ndcg_at_5_std
value: -20.5221
- type: nauc_ndcg_at_5_diff1
value: -0.8007
- type: nauc_ndcg_at_10_max
value: -20.3348
- type: nauc_ndcg_at_10_std
value: -8.2217
- type: nauc_ndcg_at_10_diff1
value: 0.5930000000000001
- type: nauc_ndcg_at_20_max
value: -19.456699999999998
- type: nauc_ndcg_at_20_std
value: -9.5993
- type: nauc_ndcg_at_20_diff1
value: -2.6712
- type: nauc_ndcg_at_100_max
value: -15.7733
- type: nauc_ndcg_at_100_std
value: -5.1976
- type: nauc_ndcg_at_100_diff1
value: 3.029
- type: nauc_ndcg_at_1000_max
value: -21.9004
- type: nauc_ndcg_at_1000_std
value: -11.8486
- type: nauc_ndcg_at_1000_diff1
value: -2.4699
- type: nauc_map_at_1_max
value: -49.7187
- type: nauc_map_at_1_std
value: -49.7187
- type: nauc_map_at_1_diff1
value: -20.5681
- type: nauc_map_at_3_max
value: -42.530499999999996
- type: nauc_map_at_3_std
value: -34.239999999999995
- type: nauc_map_at_3_diff1
value: -8.7485
- type: nauc_map_at_5_max
value: -32.3882
- type: nauc_map_at_5_std
value: -25.2735
- type: nauc_map_at_5_diff1
value: -3.7768
- type: nauc_map_at_10_max
value: -26.5982
- type: nauc_map_at_10_std
value: -16.7374
- type: nauc_map_at_10_diff1
value: -2.3562
- type: nauc_map_at_20_max
value: -25.2884
- type: nauc_map_at_20_std
value: -16.1507
- type: nauc_map_at_20_diff1
value: -3.5117000000000003
- type: nauc_map_at_100_max
value: -24.921499999999998
- type: nauc_map_at_100_std
value: -15.5839
- type: nauc_map_at_100_diff1
value: -3.2183
- type: nauc_map_at_1000_max
value: -25.655499999999996
- type: nauc_map_at_1000_std
value: -16.3961
- type: nauc_map_at_1000_diff1
value: -3.8159
- type: nauc_recall_at_1_max
value: -49.7187
- type: nauc_recall_at_1_std
value: -49.7187
- type: nauc_recall_at_1_diff1
value: -20.5681
- type: nauc_recall_at_3_max
value: -38.1894
- type: nauc_recall_at_3_std
value: -25.753700000000002
- type: nauc_recall_at_3_diff1
value: -2.8386
- type: nauc_recall_at_5_max
value: -23.336000000000002
- type: nauc_recall_at_5_std
value: -14.365400000000001
- type: nauc_recall_at_5_diff1
value: 3.0241000000000002
- type: nauc_recall_at_10_max
value: -13.7581
- type: nauc_recall_at_10_std
value: 0.758
- type: nauc_recall_at_10_diff1
value: 3.3952999999999998
- type: nauc_recall_at_20_max
value: -15.1755
- type: nauc_recall_at_20_std
value: -5.1234
- type: nauc_recall_at_20_diff1
value: -2.7003
- type: nauc_recall_at_100_max
value: -3.2379
- type: nauc_recall_at_100_std
value: 8.405
- type: nauc_recall_at_100_diff1
value: 14.2268
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: -49.7187
- type: nauc_precision_at_1_std
value: -49.7187
- type: nauc_precision_at_1_diff1
value: -20.5681
- type: nauc_precision_at_3_max
value: -38.1894
- type: nauc_precision_at_3_std
value: -25.753700000000002
- type: nauc_precision_at_3_diff1
value: -2.8386
- type: nauc_precision_at_5_max
value: -23.336000000000002
- type: nauc_precision_at_5_std
value: -14.365400000000001
- type: nauc_precision_at_5_diff1
value: 3.0241000000000002
- type: nauc_precision_at_10_max
value: -13.7581
- type: nauc_precision_at_10_std
value: 0.758
- type: nauc_precision_at_10_diff1
value: 3.3952999999999998
- type: nauc_precision_at_20_max
value: -15.1755
- type: nauc_precision_at_20_std
value: -5.1234
- type: nauc_precision_at_20_diff1
value: -2.7003
- type: nauc_precision_at_100_max
value: -3.2379
- type: nauc_precision_at_100_std
value: 8.405
- type: nauc_precision_at_100_diff1
value: 14.2268
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: -49.7187
- type: nauc_mrr_at_1_std
value: -49.7187
- type: nauc_mrr_at_1_diff1
value: -20.5681
- type: nauc_mrr_at_3_max
value: -42.530499999999996
- type: nauc_mrr_at_3_std
value: -34.239999999999995
- type: nauc_mrr_at_3_diff1
value: -8.7485
- type: nauc_mrr_at_5_max
value: -32.3882
- type: nauc_mrr_at_5_std
value: -25.2735
- type: nauc_mrr_at_5_diff1
value: -3.7768
- type: nauc_mrr_at_10_max
value: -26.5982
- type: nauc_mrr_at_10_std
value: -16.7374
- type: nauc_mrr_at_10_diff1
value: -2.3562
- type: nauc_mrr_at_20_max
value: -25.2884
- type: nauc_mrr_at_20_std
value: -16.1507
- type: nauc_mrr_at_20_diff1
value: -3.5117000000000003
- type: nauc_mrr_at_100_max
value: -24.921499999999998
- type: nauc_mrr_at_100_std
value: -15.5839
- type: nauc_mrr_at_100_diff1
value: -3.2183
- type: nauc_mrr_at_1000_max
value: -25.655499999999996
- type: nauc_mrr_at_1000_std
value: -16.3961
- type: nauc_mrr_at_1000_diff1
value: -3.8159
- type: main_score
value: 3.936
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-hin)
type: facebook/mlqa
config: ara-hin
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 1.075
- type: ndcg_at_3
value: 1.952
- type: ndcg_at_5
value: 2.8080000000000003
- type: ndcg_at_10
value: 3.665
- type: ndcg_at_20
value: 5.686
- type: ndcg_at_100
value: 14.824000000000002
- type: ndcg_at_1000
value: 19.533
- type: map_at_1
value: 1.075
- type: map_at_3
value: 1.703
- type: map_at_5
value: 2.1590000000000003
- type: map_at_10
value: 2.5069999999999997
- type: map_at_20
value: 3.052
- type: map_at_100
value: 4.165
- type: map_at_1000
value: 4.431
- type: recall_at_1
value: 1.075
- type: recall_at_3
value: 2.688
- type: recall_at_5
value: 4.839
- type: recall_at_10
value: 7.527
- type: recall_at_20
value: 15.591
- type: recall_at_100
value: 67.204
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 1.075
- type: precision_at_3
value: 0.8959999999999999
- type: precision_at_5
value: 0.968
- type: precision_at_10
value: 0.753
- type: precision_at_20
value: 0.7799999999999999
- type: precision_at_100
value: 0.672
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 1.0753000000000001
- type: mrr_at_3
value: 1.7025
- type: mrr_at_5
value: 2.1595
- type: mrr_at_10
value: 2.5066
- type: mrr_at_20
value: 3.0518
- type: mrr_at_100
value: 4.165
- type: mrr_at_1000
value: 4.4308
- type: nauc_ndcg_at_1_max
value: 21.262700000000002
- type: nauc_ndcg_at_1_std
value: -41.7253
- type: nauc_ndcg_at_1_diff1
value: 21.262700000000002
- type: nauc_ndcg_at_3_max
value: 16.2895
- type: nauc_ndcg_at_3_std
value: -21.9452
- type: nauc_ndcg_at_3_diff1
value: 12.0077
- type: nauc_ndcg_at_5_max
value: 14.027999999999999
- type: nauc_ndcg_at_5_std
value: -5.2867999999999995
- type: nauc_ndcg_at_5_diff1
value: 1.3698
- type: nauc_ndcg_at_10_max
value: 6.0018
- type: nauc_ndcg_at_10_std
value: -9.074
- type: nauc_ndcg_at_10_diff1
value: 1.3088
- type: nauc_ndcg_at_20_max
value: -6.839
- type: nauc_ndcg_at_20_std
value: -17.1404
- type: nauc_ndcg_at_20_diff1
value: -12.3198
- type: nauc_ndcg_at_100_max
value: 2.491
- type: nauc_ndcg_at_100_std
value: -5.4581
- type: nauc_ndcg_at_100_diff1
value: -2.6779
- type: nauc_ndcg_at_1000_max
value: 0.6387999999999999
- type: nauc_ndcg_at_1000_std
value: -12.7081
- type: nauc_ndcg_at_1000_diff1
value: -5.937
- type: nauc_map_at_1_max
value: 21.262700000000002
- type: nauc_map_at_1_std
value: -41.7253
- type: nauc_map_at_1_diff1
value: 21.262700000000002
- type: nauc_map_at_3_max
value: 16.7498
- type: nauc_map_at_3_std
value: -25.7376
- type: nauc_map_at_3_diff1
value: 12.853
- type: nauc_map_at_5_max
value: 14.973
- type: nauc_map_at_5_std
value: -13.637099999999998
- type: nauc_map_at_5_diff1
value: 5.048699999999999
- type: nauc_map_at_10_max
value: 10.3348
- type: nauc_map_at_10_std
value: -14.7688
- type: nauc_map_at_10_diff1
value: 4.5799
- type: nauc_map_at_20_max
value: 2.9443
- type: nauc_map_at_20_std
value: -18.388299999999997
- type: nauc_map_at_20_diff1
value: -2.883
- type: nauc_map_at_100_max
value: 4.2533
- type: nauc_map_at_100_std
value: -15.348700000000001
- type: nauc_map_at_100_diff1
value: -2.0131
- type: nauc_map_at_1000_max
value: 4.2232
- type: nauc_map_at_1000_std
value: -16.1977
- type: nauc_map_at_1000_diff1
value: -2.1845
- type: nauc_recall_at_1_max
value: 21.262700000000002
- type: nauc_recall_at_1_std
value: -41.7253
- type: nauc_recall_at_1_diff1
value: 21.262700000000002
- type: nauc_recall_at_3_max
value: 15.5258
- type: nauc_recall_at_3_std
value: -14.8099
- type: nauc_recall_at_3_diff1
value: 10.6104
- type: nauc_recall_at_5_max
value: 12.767800000000001
- type: nauc_recall_at_5_std
value: 6.8180000000000005
- type: nauc_recall_at_5_diff1
value: -3.8459
- type: nauc_recall_at_10_max
value: 0.5512
- type: nauc_recall_at_10_std
value: -3.2002
- type: nauc_recall_at_10_diff1
value: -2.238
- type: nauc_recall_at_20_max
value: -15.572099999999999
- type: nauc_recall_at_20_std
value: -17.1781
- type: nauc_recall_at_20_diff1
value: -20.64
- type: nauc_recall_at_100_max
value: 5.5887
- type: nauc_recall_at_100_std
value: 6.551
- type: nauc_recall_at_100_diff1
value: 2.6925999999999997
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 21.262700000000002
- type: nauc_precision_at_1_std
value: -41.7253
- type: nauc_precision_at_1_diff1
value: 21.262700000000002
- type: nauc_precision_at_3_max
value: 15.5258
- type: nauc_precision_at_3_std
value: -14.8099
- type: nauc_precision_at_3_diff1
value: 10.6104
- type: nauc_precision_at_5_max
value: 12.767800000000001
- type: nauc_precision_at_5_std
value: 6.8180000000000005
- type: nauc_precision_at_5_diff1
value: -3.8459
- type: nauc_precision_at_10_max
value: 0.5512
- type: nauc_precision_at_10_std
value: -3.2002
- type: nauc_precision_at_10_diff1
value: -2.238
- type: nauc_precision_at_20_max
value: -15.572099999999999
- type: nauc_precision_at_20_std
value: -17.1781
- type: nauc_precision_at_20_diff1
value: -20.64
- type: nauc_precision_at_100_max
value: 5.5887
- type: nauc_precision_at_100_std
value: 6.551
- type: nauc_precision_at_100_diff1
value: 2.6925999999999997
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 21.262700000000002
- type: nauc_mrr_at_1_std
value: -41.7253
- type: nauc_mrr_at_1_diff1
value: 21.262700000000002
- type: nauc_mrr_at_3_max
value: 16.7498
- type: nauc_mrr_at_3_std
value: -25.7376
- type: nauc_mrr_at_3_diff1
value: 12.853
- type: nauc_mrr_at_5_max
value: 14.973
- type: nauc_mrr_at_5_std
value: -13.637099999999998
- type: nauc_mrr_at_5_diff1
value: 5.048699999999999
- type: nauc_mrr_at_10_max
value: 10.3348
- type: nauc_mrr_at_10_std
value: -14.7688
- type: nauc_mrr_at_10_diff1
value: 4.5799
- type: nauc_mrr_at_20_max
value: 2.9443
- type: nauc_mrr_at_20_std
value: -18.388299999999997
- type: nauc_mrr_at_20_diff1
value: -2.883
- type: nauc_mrr_at_100_max
value: 4.2533
- type: nauc_mrr_at_100_std
value: -15.348700000000001
- type: nauc_mrr_at_100_diff1
value: -2.0131
- type: nauc_mrr_at_1000_max
value: 4.2232
- type: nauc_mrr_at_1000_std
value: -16.1977
- type: nauc_mrr_at_1000_diff1
value: -2.1845
- type: main_score
value: 3.665
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-vie)
type: facebook/mlqa
config: ara-vie
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.613
- type: ndcg_at_3
value: 1.307
- type: ndcg_at_5
value: 1.307
- type: ndcg_at_10
value: 2.843
- type: ndcg_at_20
value: 5.175
- type: ndcg_at_100
value: 13.927
- type: ndcg_at_1000
value: 18.776
- type: map_at_1
value: 0.613
- type: map_at_3
value: 1.125
- type: map_at_5
value: 1.125
- type: map_at_10
value: 1.729
- type: map_at_20
value: 2.371
- type: map_at_100
value: 3.38
- type: map_at_1000
value: 3.6540000000000004
- type: recall_at_1
value: 0.613
- type: recall_at_3
value: 1.8399999999999999
- type: recall_at_5
value: 1.8399999999999999
- type: recall_at_10
value: 6.748
- type: recall_at_20
value: 15.951
- type: recall_at_100
value: 66.258
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.613
- type: precision_at_3
value: 0.613
- type: precision_at_5
value: 0.368
- type: precision_at_10
value: 0.675
- type: precision_at_20
value: 0.7979999999999999
- type: precision_at_100
value: 0.6629999999999999
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.6134999999999999
- type: mrr_at_3
value: 1.1247
- type: mrr_at_5
value: 1.1247
- type: mrr_at_10
value: 1.7287000000000001
- type: mrr_at_20
value: 2.3708
- type: mrr_at_100
value: 3.38
- type: mrr_at_1000
value: 3.6543
- type: nauc_ndcg_at_1_max
value: -6.955400000000001
- type: nauc_ndcg_at_1_std
value: 32.3707
- type: nauc_ndcg_at_1_diff1
value: -31.731199999999998
- type: nauc_ndcg_at_3_max
value: -5.0637
- type: nauc_ndcg_at_3_std
value: -7.6478
- type: nauc_ndcg_at_3_diff1
value: -31.9542
- type: nauc_ndcg_at_5_max
value: -5.0637
- type: nauc_ndcg_at_5_std
value: -7.6478
- type: nauc_ndcg_at_5_diff1
value: -31.9542
- type: nauc_ndcg_at_10_max
value: -5.5409
- type: nauc_ndcg_at_10_std
value: -5.2786
- type: nauc_ndcg_at_10_diff1
value: -14.349300000000001
- type: nauc_ndcg_at_20_max
value: 3.7065
- type: nauc_ndcg_at_20_std
value: -2.9243
- type: nauc_ndcg_at_20_diff1
value: -11.675
- type: nauc_ndcg_at_100_max
value: 5.6824
- type: nauc_ndcg_at_100_std
value: 4.7786
- type: nauc_ndcg_at_100_diff1
value: -15.0033
- type: nauc_ndcg_at_1000_max
value: 2.2786
- type: nauc_ndcg_at_1000_std
value: 1.9116000000000002
- type: nauc_ndcg_at_1000_diff1
value: -14.347299999999999
- type: nauc_map_at_1_max
value: -6.955400000000001
- type: nauc_map_at_1_std
value: 32.3707
- type: nauc_map_at_1_diff1
value: -31.731199999999998
- type: nauc_map_at_3_max
value: -6.5623000000000005
- type: nauc_map_at_3_std
value: -1.4144999999999999
- type: nauc_map_at_3_diff1
value: -32.321299999999994
- type: nauc_map_at_5_max
value: -6.5623000000000005
- type: nauc_map_at_5_std
value: -1.4144999999999999
- type: nauc_map_at_5_diff1
value: -32.321299999999994
- type: nauc_map_at_10_max
value: -5.9183
- type: nauc_map_at_10_std
value: -1.3847
- type: nauc_map_at_10_diff1
value: -21.0487
- type: nauc_map_at_20_max
value: -0.3147
- type: nauc_map_at_20_std
value: -0.8122
- type: nauc_map_at_20_diff1
value: -18.2027
- type: nauc_map_at_100_max
value: 0.5482
- type: nauc_map_at_100_std
value: 2.1596
- type: nauc_map_at_100_diff1
value: -17.8683
- type: nauc_map_at_1000_max
value: -0.0387
- type: nauc_map_at_1000_std
value: 1.7451999999999999
- type: nauc_map_at_1000_diff1
value: -17.9499
- type: nauc_recall_at_1_max
value: -6.955400000000001
- type: nauc_recall_at_1_std
value: 32.3707
- type: nauc_recall_at_1_diff1
value: -31.731199999999998
- type: nauc_recall_at_3_max
value: -2.1052999999999997
- type: nauc_recall_at_3_std
value: -18.885199999999998
- type: nauc_recall_at_3_diff1
value: -31.206699999999998
- type: nauc_recall_at_5_max
value: -2.1052999999999997
- type: nauc_recall_at_5_std
value: -18.885199999999998
- type: nauc_recall_at_5_diff1
value: -31.206699999999998
- type: nauc_recall_at_10_max
value: -5.5279
- type: nauc_recall_at_10_std
value: -8.5135
- type: nauc_recall_at_10_diff1
value: -7.7075000000000005
- type: nauc_recall_at_20_max
value: 6.4999
- type: nauc_recall_at_20_std
value: -3.8489000000000004
- type: nauc_recall_at_20_diff1
value: -7.310999999999999
- type: nauc_recall_at_100_max
value: 9.9534
- type: nauc_recall_at_100_std
value: 8.2841
- type: nauc_recall_at_100_diff1
value: -15.723300000000002
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: -6.955400000000001
- type: nauc_precision_at_1_std
value: 32.3707
- type: nauc_precision_at_1_diff1
value: -31.731199999999998
- type: nauc_precision_at_3_max
value: -2.1052999999999997
- type: nauc_precision_at_3_std
value: -18.885199999999998
- type: nauc_precision_at_3_diff1
value: -31.206699999999998
- type: nauc_precision_at_5_max
value: -2.1052999999999997
- type: nauc_precision_at_5_std
value: -18.885199999999998
- type: nauc_precision_at_5_diff1
value: -31.206699999999998
- type: nauc_precision_at_10_max
value: -5.5279
- type: nauc_precision_at_10_std
value: -8.5135
- type: nauc_precision_at_10_diff1
value: -7.7075000000000005
- type: nauc_precision_at_20_max
value: 6.4999
- type: nauc_precision_at_20_std
value: -3.8489000000000004
- type: nauc_precision_at_20_diff1
value: -7.310999999999999
- type: nauc_precision_at_100_max
value: 9.9534
- type: nauc_precision_at_100_std
value: 8.2841
- type: nauc_precision_at_100_diff1
value: -15.723300000000002
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: -6.955400000000001
- type: nauc_mrr_at_1_std
value: 32.3707
- type: nauc_mrr_at_1_diff1
value: -31.731199999999998
- type: nauc_mrr_at_3_max
value: -6.5623000000000005
- type: nauc_mrr_at_3_std
value: -1.4144999999999999
- type: nauc_mrr_at_3_diff1
value: -32.321299999999994
- type: nauc_mrr_at_5_max
value: -6.5623000000000005
- type: nauc_mrr_at_5_std
value: -1.4144999999999999
- type: nauc_mrr_at_5_diff1
value: -32.321299999999994
- type: nauc_mrr_at_10_max
value: -5.9183
- type: nauc_mrr_at_10_std
value: -1.3847
- type: nauc_mrr_at_10_diff1
value: -21.0487
- type: nauc_mrr_at_20_max
value: -0.3147
- type: nauc_mrr_at_20_std
value: -0.8122
- type: nauc_mrr_at_20_diff1
value: -18.2027
- type: nauc_mrr_at_100_max
value: 0.5482
- type: nauc_mrr_at_100_std
value: 2.1596
- type: nauc_mrr_at_100_diff1
value: -17.8683
- type: nauc_mrr_at_1000_max
value: -0.0387
- type: nauc_mrr_at_1000_std
value: 1.7451999999999999
- type: nauc_mrr_at_1000_diff1
value: -17.9499
- type: main_score
value: 2.843
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-zho)
type: facebook/mlqa
config: ara-zho
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.532
- type: ndcg_at_3
value: 1.133
- type: ndcg_at_5
value: 1.592
- type: ndcg_at_10
value: 3.001
- type: ndcg_at_20
value: 4.599
- type: ndcg_at_100
value: 13.530000000000001
- type: ndcg_at_1000
value: 18.706999999999997
- type: map_at_1
value: 0.532
- type: map_at_3
value: 0.975
- type: map_at_5
value: 1.2409999999999999
- type: map_at_10
value: 1.8419999999999999
- type: map_at_20
value: 2.273
- type: map_at_100
value: 3.3529999999999998
- type: map_at_1000
value: 3.642
- type: recall_at_1
value: 0.532
- type: recall_at_3
value: 1.5959999999999999
- type: recall_at_5
value: 2.6599999999999997
- type: recall_at_10
value: 6.915
- type: recall_at_20
value: 13.297999999999998
- type: recall_at_100
value: 63.83
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.532
- type: precision_at_3
value: 0.532
- type: precision_at_5
value: 0.532
- type: precision_at_10
value: 0.6910000000000001
- type: precision_at_20
value: 0.6649999999999999
- type: precision_at_100
value: 0.638
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.5319
- type: mrr_at_3
value: 0.9752000000000001
- type: mrr_at_5
value: 1.2411
- type: mrr_at_10
value: 1.8416
- type: mrr_at_20
value: 2.2734
- type: mrr_at_100
value: 3.3527
- type: mrr_at_1000
value: 3.6415
- type: nauc_ndcg_at_1_max
value: 100.0
- type: nauc_ndcg_at_1_std
value: 100.0
- type: nauc_ndcg_at_1_diff1
value: 100.0
- type: nauc_ndcg_at_3_max
value: 43.0668
- type: nauc_ndcg_at_3_std
value: 53.02329999999999
- type: nauc_ndcg_at_3_diff1
value: 42.2661
- type: nauc_ndcg_at_5_max
value: 15.126999999999999
- type: nauc_ndcg_at_5_std
value: 44.332899999999995
- type: nauc_ndcg_at_5_diff1
value: 18.2645
- type: nauc_ndcg_at_10_max
value: 19.707900000000002
- type: nauc_ndcg_at_10_std
value: 24.8599
- type: nauc_ndcg_at_10_diff1
value: 8.5712
- type: nauc_ndcg_at_20_max
value: 18.529999999999998
- type: nauc_ndcg_at_20_std
value: 23.8624
- type: nauc_ndcg_at_20_diff1
value: 3.8219999999999996
- type: nauc_ndcg_at_100_max
value: 13.3018
- type: nauc_ndcg_at_100_std
value: 13.919699999999999
- type: nauc_ndcg_at_100_diff1
value: 5.1807
- type: nauc_ndcg_at_1000_max
value: 15.4975
- type: nauc_ndcg_at_1000_std
value: 19.0027
- type: nauc_ndcg_at_1000_diff1
value: 10.5977
- type: nauc_map_at_1_max
value: 100.0
- type: nauc_map_at_1_std
value: 100.0
- type: nauc_map_at_1_diff1
value: 100.0
- type: nauc_map_at_3_max
value: 52.9714
- type: nauc_map_at_3_std
value: 62.1425
- type: nauc_map_at_3_diff1
value: 49.1278
- type: nauc_map_at_5_max
value: 30.0502
- type: nauc_map_at_5_std
value: 53.7191
- type: nauc_map_at_5_diff1
value: 29.7903
- type: nauc_map_at_10_max
value: 28.0566
- type: nauc_map_at_10_std
value: 37.3678
- type: nauc_map_at_10_diff1
value: 19.3192
- type: nauc_map_at_20_max
value: 24.929499999999997
- type: nauc_map_at_20_std
value: 34.0077
- type: nauc_map_at_20_diff1
value: 14.304
- type: nauc_map_at_100_max
value: 21.8729
- type: nauc_map_at_100_std
value: 27.860000000000003
- type: nauc_map_at_100_diff1
value: 15.3385
- type: nauc_map_at_1000_max
value: 22.311700000000002
- type: nauc_map_at_1000_std
value: 28.900100000000002
- type: nauc_map_at_1000_diff1
value: 16.1893
- type: nauc_recall_at_1_max
value: 100.0
- type: nauc_recall_at_1_std
value: 100.0
- type: nauc_recall_at_1_diff1
value: 100.0
- type: nauc_recall_at_3_max
value: 24.990000000000002
- type: nauc_recall_at_3_std
value: 36.1992
- type: nauc_recall_at_3_diff1
value: 30.3501
- type: nauc_recall_at_5_max
value: -6.6037
- type: nauc_recall_at_5_std
value: 30.852899999999998
- type: nauc_recall_at_5_diff1
value: 1.7645000000000002
- type: nauc_recall_at_10_max
value: 13.189899999999998
- type: nauc_recall_at_10_std
value: 13.314699999999998
- type: nauc_recall_at_10_diff1
value: -0.8269000000000001
- type: nauc_recall_at_20_max
value: 15.8802
- type: nauc_recall_at_20_std
value: 17.947499999999998
- type: nauc_recall_at_20_diff1
value: -2.5606
- type: nauc_recall_at_100_max
value: 9.5721
- type: nauc_recall_at_100_std
value: 6.9126
- type: nauc_recall_at_100_diff1
value: -2.2487
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 100.0
- type: nauc_precision_at_1_std
value: 100.0
- type: nauc_precision_at_1_diff1
value: 100.0
- type: nauc_precision_at_3_max
value: 24.990000000000002
- type: nauc_precision_at_3_std
value: 36.1992
- type: nauc_precision_at_3_diff1
value: 30.3501
- type: nauc_precision_at_5_max
value: -6.6037
- type: nauc_precision_at_5_std
value: 30.852899999999998
- type: nauc_precision_at_5_diff1
value: 1.7645000000000002
- type: nauc_precision_at_10_max
value: 13.189899999999998
- type: nauc_precision_at_10_std
value: 13.314699999999998
- type: nauc_precision_at_10_diff1
value: -0.8269000000000001
- type: nauc_precision_at_20_max
value: 15.8802
- type: nauc_precision_at_20_std
value: 17.947499999999998
- type: nauc_precision_at_20_diff1
value: -2.5606
- type: nauc_precision_at_100_max
value: 9.5721
- type: nauc_precision_at_100_std
value: 6.9126
- type: nauc_precision_at_100_diff1
value: -2.2487
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 100.0
- type: nauc_mrr_at_1_std
value: 100.0
- type: nauc_mrr_at_1_diff1
value: 100.0
- type: nauc_mrr_at_3_max
value: 52.9714
- type: nauc_mrr_at_3_std
value: 62.1425
- type: nauc_mrr_at_3_diff1
value: 49.1278
- type: nauc_mrr_at_5_max
value: 30.0502
- type: nauc_mrr_at_5_std
value: 53.7191
- type: nauc_mrr_at_5_diff1
value: 29.7903
- type: nauc_mrr_at_10_max
value: 28.0566
- type: nauc_mrr_at_10_std
value: 37.3678
- type: nauc_mrr_at_10_diff1
value: 19.3192
- type: nauc_mrr_at_20_max
value: 24.929499999999997
- type: nauc_mrr_at_20_std
value: 34.0077
- type: nauc_mrr_at_20_diff1
value: 14.304
- type: nauc_mrr_at_100_max
value: 21.8729
- type: nauc_mrr_at_100_std
value: 27.860000000000003
- type: nauc_mrr_at_100_diff1
value: 15.3385
- type: nauc_mrr_at_1000_max
value: 22.311700000000002
- type: nauc_mrr_at_1000_std
value: 28.900100000000002
- type: nauc_mrr_at_1000_diff1
value: 16.1893
- type: main_score
value: 3.001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-ara)
type: facebook/mlqa
config: deu-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.966
- type: ndcg_at_3
value: 2.122
- type: ndcg_at_5
value: 3.3070000000000004
- type: ndcg_at_10
value: 4.409
- type: ndcg_at_20
value: 5.734
- type: ndcg_at_100
value: 14.12
- type: ndcg_at_1000
value: 19.293
- type: map_at_1
value: 0.966
- type: map_at_3
value: 1.8519999999999999
- type: map_at_5
value: 2.504
- type: map_at_10
value: 2.965
- type: map_at_20
value: 3.318
- type: map_at_100
value: 4.249
- type: map_at_1000
value: 4.522
- type: recall_at_1
value: 0.966
- type: recall_at_3
value: 2.899
- type: recall_at_5
value: 5.797
- type: recall_at_10
value: 9.179
- type: recall_at_20
value: 14.493
- type: recall_at_100
value: 63.285000000000004
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.966
- type: precision_at_3
value: 0.966
- type: precision_at_5
value: 1.159
- type: precision_at_10
value: 0.918
- type: precision_at_20
value: 0.7250000000000001
- type: precision_at_100
value: 0.633
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.9662000000000001
- type: mrr_at_3
value: 1.8519
- type: mrr_at_5
value: 2.504
- type: mrr_at_10
value: 2.9648999999999996
- type: mrr_at_20
value: 3.3182000000000005
- type: mrr_at_100
value: 4.249
- type: mrr_at_1000
value: 4.5216
- type: nauc_ndcg_at_1_max
value: 100.0
- type: nauc_ndcg_at_1_std
value: 100.0
- type: nauc_ndcg_at_1_diff1
value: 54.942
- type: nauc_ndcg_at_3_max
value: 49.4196
- type: nauc_ndcg_at_3_std
value: 56.1838
- type: nauc_ndcg_at_3_diff1
value: 32.665499999999994
- type: nauc_ndcg_at_5_max
value: 40.9893
- type: nauc_ndcg_at_5_std
value: 47.916799999999995
- type: nauc_ndcg_at_5_diff1
value: 15.5136
- type: nauc_ndcg_at_10_max
value: 29.115299999999998
- type: nauc_ndcg_at_10_std
value: 32.858
- type: nauc_ndcg_at_10_diff1
value: 17.005300000000002
- type: nauc_ndcg_at_20_max
value: 31.2368
- type: nauc_ndcg_at_20_std
value: 21.3015
- type: nauc_ndcg_at_20_diff1
value: 18.6284
- type: nauc_ndcg_at_100_max
value: 25.645400000000002
- type: nauc_ndcg_at_100_std
value: 12.3866
- type: nauc_ndcg_at_100_diff1
value: 10.502
- type: nauc_ndcg_at_1000_max
value: 33.4067
- type: nauc_ndcg_at_1000_std
value: 24.5891
- type: nauc_ndcg_at_1000_diff1
value: 15.9563
- type: nauc_map_at_1_max
value: 100.0
- type: nauc_map_at_1_std
value: 100.0
- type: nauc_map_at_1_diff1
value: 54.942
- type: nauc_map_at_3_max
value: 56.2303
- type: nauc_map_at_3_std
value: 62.7938
- type: nauc_map_at_3_diff1
value: 35.7282
- type: nauc_map_at_5_max
value: 48.2731
- type: nauc_map_at_5_std
value: 55.2495
- type: nauc_map_at_5_diff1
value: 22.6228
- type: nauc_map_at_10_max
value: 39.508700000000005
- type: nauc_map_at_10_std
value: 44.6957
- type: nauc_map_at_10_diff1
value: 22.8637
- type: nauc_map_at_20_max
value: 39.6895
- type: nauc_map_at_20_std
value: 38.8865
- type: nauc_map_at_20_diff1
value: 23.1892
- type: nauc_map_at_100_max
value: 38.5582
- type: nauc_map_at_100_std
value: 35.4221
- type: nauc_map_at_100_diff1
value: 20.6822
- type: nauc_map_at_1000_max
value: 39.5093
- type: nauc_map_at_1000_std
value: 36.8263
- type: nauc_map_at_1000_diff1
value: 21.2755
- type: nauc_recall_at_1_max
value: 100.0
- type: nauc_recall_at_1_std
value: 100.0
- type: nauc_recall_at_1_diff1
value: 54.942
- type: nauc_recall_at_3_max
value: 36.7448
- type: nauc_recall_at_3_std
value: 43.7074
- type: nauc_recall_at_3_diff1
value: 26.950200000000002
- type: nauc_recall_at_5_max
value: 31.4159
- type: nauc_recall_at_5_std
value: 38.074200000000005
- type: nauc_recall_at_5_diff1
value: 5.5841
- type: nauc_recall_at_10_max
value: 17.8359
- type: nauc_recall_at_10_std
value: 19.564799999999998
- type: nauc_recall_at_10_diff1
value: 10.7378
- type: nauc_recall_at_20_max
value: 24.5378
- type: nauc_recall_at_20_std
value: 3.8707
- type: nauc_recall_at_20_diff1
value: 15.1151
- type: nauc_recall_at_100_max
value: 12.8051
- type: nauc_recall_at_100_std
value: -9.097900000000001
- type: nauc_recall_at_100_diff1
value: 0.7080000000000001
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 100.0
- type: nauc_precision_at_1_std
value: 100.0
- type: nauc_precision_at_1_diff1
value: 54.942
- type: nauc_precision_at_3_max
value: 36.7448
- type: nauc_precision_at_3_std
value: 43.7074
- type: nauc_precision_at_3_diff1
value: 26.950200000000002
- type: nauc_precision_at_5_max
value: 31.4159
- type: nauc_precision_at_5_std
value: 38.074200000000005
- type: nauc_precision_at_5_diff1
value: 5.5841
- type: nauc_precision_at_10_max
value: 17.8359
- type: nauc_precision_at_10_std
value: 19.564799999999998
- type: nauc_precision_at_10_diff1
value: 10.7378
- type: nauc_precision_at_20_max
value: 24.5378
- type: nauc_precision_at_20_std
value: 3.8707
- type: nauc_precision_at_20_diff1
value: 15.1151
- type: nauc_precision_at_100_max
value: 12.8051
- type: nauc_precision_at_100_std
value: -9.097900000000001
- type: nauc_precision_at_100_diff1
value: 0.7080000000000001
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 100.0
- type: nauc_mrr_at_1_std
value: 100.0
- type: nauc_mrr_at_1_diff1
value: 54.942
- type: nauc_mrr_at_3_max
value: 56.2303
- type: nauc_mrr_at_3_std
value: 62.7938
- type: nauc_mrr_at_3_diff1
value: 35.7282
- type: nauc_mrr_at_5_max
value: 48.2731
- type: nauc_mrr_at_5_std
value: 55.2495
- type: nauc_mrr_at_5_diff1
value: 22.6228
- type: nauc_mrr_at_10_max
value: 39.508700000000005
- type: nauc_mrr_at_10_std
value: 44.6957
- type: nauc_mrr_at_10_diff1
value: 22.8637
- type: nauc_mrr_at_20_max
value: 39.6895
- type: nauc_mrr_at_20_std
value: 38.8865
- type: nauc_mrr_at_20_diff1
value: 23.1892
- type: nauc_mrr_at_100_max
value: 38.5582
- type: nauc_mrr_at_100_std
value: 35.4221
- type: nauc_mrr_at_100_diff1
value: 20.6822
- type: nauc_mrr_at_1000_max
value: 39.5093
- type: nauc_mrr_at_1000_std
value: 36.8263
- type: nauc_mrr_at_1000_diff1
value: 21.2755
- type: main_score
value: 4.409
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-ara)
type: facebook/mlqa
config: eng-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.774
- type: ndcg_at_3
value: 1.745
- type: ndcg_at_5
value: 2.2030000000000003
- type: ndcg_at_10
value: 2.635
- type: ndcg_at_20
value: 3.514
- type: ndcg_at_100
value: 8.031
- type: ndcg_at_1000
value: 16.525000000000002
- type: map_at_1
value: 0.774
- type: map_at_3
value: 1.4829999999999999
- type: map_at_5
value: 1.725
- type: map_at_10
value: 1.9
- type: map_at_20
value: 2.1399999999999997
- type: map_at_100
value: 2.71
- type: map_at_1000
value: 3.0220000000000002
- type: recall_at_1
value: 0.774
- type: recall_at_3
value: 2.5149999999999997
- type: recall_at_5
value: 3.675
- type: recall_at_10
value: 5.029
- type: recall_at_20
value: 8.511000000000001
- type: recall_at_100
value: 33.656000000000006
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.774
- type: precision_at_3
value: 0.8380000000000001
- type: precision_at_5
value: 0.735
- type: precision_at_10
value: 0.503
- type: precision_at_20
value: 0.426
- type: precision_at_100
value: 0.337
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.7736999999999999
- type: mrr_at_3
value: 1.4829
- type: mrr_at_5
value: 1.7247
- type: mrr_at_10
value: 1.8998000000000002
- type: mrr_at_20
value: 2.1399999999999997
- type: mrr_at_100
value: 2.71
- type: mrr_at_1000
value: 3.0224
- type: nauc_ndcg_at_1_max
value: 60.5507
- type: nauc_ndcg_at_1_std
value: 17.7109
- type: nauc_ndcg_at_1_diff1
value: 69.8508
- type: nauc_ndcg_at_3_max
value: 17.8387
- type: nauc_ndcg_at_3_std
value: -12.759699999999999
- type: nauc_ndcg_at_3_diff1
value: 32.9363
- type: nauc_ndcg_at_5_max
value: 13.933300000000001
- type: nauc_ndcg_at_5_std
value: -7.4468000000000005
- type: nauc_ndcg_at_5_diff1
value: 34.0875
- type: nauc_ndcg_at_10_max
value: 24.0901
- type: nauc_ndcg_at_10_std
value: -1.9087
- type: nauc_ndcg_at_10_diff1
value: 30.859199999999998
- type: nauc_ndcg_at_20_max
value: 14.4843
- type: nauc_ndcg_at_20_std
value: -2.4103
- type: nauc_ndcg_at_20_diff1
value: 25.251800000000003
- type: nauc_ndcg_at_100_max
value: 11.147400000000001
- type: nauc_ndcg_at_100_std
value: 0.5721
- type: nauc_ndcg_at_100_diff1
value: 18.865499999999997
- type: nauc_ndcg_at_1000_max
value: 14.3921
- type: nauc_ndcg_at_1000_std
value: -1.4730999999999999
- type: nauc_ndcg_at_1000_diff1
value: 23.5761
- type: nauc_map_at_1_max
value: 60.5507
- type: nauc_map_at_1_std
value: 17.7109
- type: nauc_map_at_1_diff1
value: 69.8508
- type: nauc_map_at_3_max
value: 23.5728
- type: nauc_map_at_3_std
value: -8.4614
- type: nauc_map_at_3_diff1
value: 37.580000000000005
- type: nauc_map_at_5_max
value: 20.072300000000002
- type: nauc_map_at_5_std
value: -5.5798
- type: nauc_map_at_5_diff1
value: 37.894800000000004
- type: nauc_map_at_10_max
value: 25.3164
- type: nauc_map_at_10_std
value: -2.6436
- type: nauc_map_at_10_diff1
value: 35.591
- type: nauc_map_at_20_max
value: 20.962
- type: nauc_map_at_20_std
value: -2.7786999999999997
- type: nauc_map_at_20_diff1
value: 32.562999999999995
- type: nauc_map_at_100_max
value: 19.2988
- type: nauc_map_at_100_std
value: -1.6022
- type: nauc_map_at_100_diff1
value: 30.2483
- type: nauc_map_at_1000_max
value: 19.542399999999997
- type: nauc_map_at_1000_std
value: -1.9428
- type: nauc_map_at_1000_diff1
value: 30.5552
- type: nauc_recall_at_1_max
value: 60.5507
- type: nauc_recall_at_1_std
value: 17.7109
- type: nauc_recall_at_1_diff1
value: 69.8508
- type: nauc_recall_at_3_max
value: 7.9922
- type: nauc_recall_at_3_std
value: -20.188
- type: nauc_recall_at_3_diff1
value: 25.0336
- type: nauc_recall_at_5_max
value: 5.2796
- type: nauc_recall_at_5_std
value: -9.5635
- type: nauc_recall_at_5_diff1
value: 28.912900000000004
- type: nauc_recall_at_10_max
value: 24.0746
- type: nauc_recall_at_10_std
value: 0.1106
- type: nauc_recall_at_10_diff1
value: 25.271
- type: nauc_recall_at_20_max
value: 8.2207
- type: nauc_recall_at_20_std
value: -1.5499
- type: nauc_recall_at_20_diff1
value: 18.351200000000002
- type: nauc_recall_at_100_max
value: 6.2993
- type: nauc_recall_at_100_std
value: 2.1907
- type: nauc_recall_at_100_diff1
value: 11.477
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 60.5507
- type: nauc_precision_at_1_std
value: 17.7109
- type: nauc_precision_at_1_diff1
value: 69.8508
- type: nauc_precision_at_3_max
value: 7.9922
- type: nauc_precision_at_3_std
value: -20.188
- type: nauc_precision_at_3_diff1
value: 25.0336
- type: nauc_precision_at_5_max
value: 5.2796
- type: nauc_precision_at_5_std
value: -9.5635
- type: nauc_precision_at_5_diff1
value: 28.912900000000004
- type: nauc_precision_at_10_max
value: 24.0746
- type: nauc_precision_at_10_std
value: 0.1106
- type: nauc_precision_at_10_diff1
value: 25.271
- type: nauc_precision_at_20_max
value: 8.2207
- type: nauc_precision_at_20_std
value: -1.5499
- type: nauc_precision_at_20_diff1
value: 18.351200000000002
- type: nauc_precision_at_100_max
value: 6.2993
- type: nauc_precision_at_100_std
value: 2.1907
- type: nauc_precision_at_100_diff1
value: 11.477
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 60.5507
- type: nauc_mrr_at_1_std
value: 17.7109
- type: nauc_mrr_at_1_diff1
value: 69.8508
- type: nauc_mrr_at_3_max
value: 23.5728
- type: nauc_mrr_at_3_std
value: -8.4614
- type: nauc_mrr_at_3_diff1
value: 37.580000000000005
- type: nauc_mrr_at_5_max
value: 20.072300000000002
- type: nauc_mrr_at_5_std
value: -5.5798
- type: nauc_mrr_at_5_diff1
value: 37.894800000000004
- type: nauc_mrr_at_10_max
value: 25.3164
- type: nauc_mrr_at_10_std
value: -2.6436
- type: nauc_mrr_at_10_diff1
value: 35.591
- type: nauc_mrr_at_20_max
value: 20.962
- type: nauc_mrr_at_20_std
value: -2.7786999999999997
- type: nauc_mrr_at_20_diff1
value: 32.562999999999995
- type: nauc_mrr_at_100_max
value: 19.2988
- type: nauc_mrr_at_100_std
value: -1.6022
- type: nauc_mrr_at_100_diff1
value: 30.2483
- type: nauc_mrr_at_1000_max
value: 19.542399999999997
- type: nauc_mrr_at_1000_std
value: -1.9428
- type: nauc_mrr_at_1000_diff1
value: 30.5552
- type: main_score
value: 2.635
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-ara)
type: facebook/mlqa
config: spa-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 1.863
- type: ndcg_at_3
value: 3.66
- type: ndcg_at_5
value: 4.168
- type: ndcg_at_10
value: 5.173
- type: ndcg_at_20
value: 7.7090000000000005
- type: ndcg_at_100
value: 17.645
- type: ndcg_at_1000
value: 21.322
- type: map_at_1
value: 1.863
- type: map_at_3
value: 3.209
- type: map_at_5
value: 3.489
- type: map_at_10
value: 3.904
- type: map_at_20
value: 4.612
- type: map_at_100
value: 5.858
- type: map_at_1000
value: 6.069999999999999
- type: recall_at_1
value: 1.863
- type: recall_at_3
value: 4.968999999999999
- type: recall_at_5
value: 6.211
- type: recall_at_10
value: 9.317
- type: recall_at_20
value: 19.255
- type: recall_at_100
value: 74.534
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 1.863
- type: precision_at_3
value: 1.656
- type: precision_at_5
value: 1.242
- type: precision_at_10
value: 0.932
- type: precision_at_20
value: 0.963
- type: precision_at_100
value: 0.745
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 1.8634000000000002
- type: mrr_at_3
value: 3.2091000000000003
- type: mrr_at_5
value: 3.4886
- type: mrr_at_10
value: 3.9044000000000003
- type: mrr_at_20
value: 4.612299999999999
- type: mrr_at_100
value: 5.8578
- type: mrr_at_1000
value: 6.0696
- type: nauc_ndcg_at_1_max
value: 59.8106
- type: nauc_ndcg_at_1_std
value: 41.6091
- type: nauc_ndcg_at_1_diff1
value: 15.8988
- type: nauc_ndcg_at_3_max
value: 17.326900000000002
- type: nauc_ndcg_at_3_std
value: 0.8758
- type: nauc_ndcg_at_3_diff1
value: -13.537199999999999
- type: nauc_ndcg_at_5_max
value: 17.0792
- type: nauc_ndcg_at_5_std
value: -4.134
- type: nauc_ndcg_at_5_diff1
value: -14.3938
- type: nauc_ndcg_at_10_max
value: 19.2218
- type: nauc_ndcg_at_10_std
value: -4.1131
- type: nauc_ndcg_at_10_diff1
value: -0.5739
- type: nauc_ndcg_at_20_max
value: 14.7981
- type: nauc_ndcg_at_20_std
value: -0.0645
- type: nauc_ndcg_at_20_diff1
value: -1.8365
- type: nauc_ndcg_at_100_max
value: 20.259
- type: nauc_ndcg_at_100_std
value: 3.2459000000000002
- type: nauc_ndcg_at_100_diff1
value: -3.5298999999999996
- type: nauc_ndcg_at_1000_max
value: 18.958
- type: nauc_ndcg_at_1000_std
value: 2.0313999999999997
- type: nauc_ndcg_at_1000_diff1
value: -3.6224
- type: nauc_map_at_1_max
value: 59.8106
- type: nauc_map_at_1_std
value: 41.6091
- type: nauc_map_at_1_diff1
value: 15.8988
- type: nauc_map_at_3_max
value: 23.4457
- type: nauc_map_at_3_std
value: 6.589200000000001
- type: nauc_map_at_3_diff1
value: -9.1205
- type: nauc_map_at_5_max
value: 23.0402
- type: nauc_map_at_5_std
value: 2.8784
- type: nauc_map_at_5_diff1
value: -10.0377
- type: nauc_map_at_10_max
value: 23.477
- type: nauc_map_at_10_std
value: 1.9317999999999997
- type: nauc_map_at_10_diff1
value: -3.1433000000000004
- type: nauc_map_at_20_max
value: 21.138199999999998
- type: nauc_map_at_20_std
value: 3.3765000000000005
- type: nauc_map_at_20_diff1
value: -3.2526
- type: nauc_map_at_100_max
value: 21.8857
- type: nauc_map_at_100_std
value: 4.147
- type: nauc_map_at_100_diff1
value: -3.5649
- type: nauc_map_at_1000_max
value: 21.8479
- type: nauc_map_at_1000_std
value: 4.0359
- type: nauc_map_at_1000_diff1
value: -3.5894000000000004
- type: nauc_recall_at_1_max
value: 59.8106
- type: nauc_recall_at_1_std
value: 41.6091
- type: nauc_recall_at_1_diff1
value: 15.8988
- type: nauc_recall_at_3_max
value: 5.8776
- type: nauc_recall_at_3_std
value: -9.775
- type: nauc_recall_at_3_diff1
value: -21.8474
- type: nauc_recall_at_5_max
value: 7.184799999999999
- type: nauc_recall_at_5_std
value: -15.965399999999999
- type: nauc_recall_at_5_diff1
value: -21.5915
- type: nauc_recall_at_10_max
value: 14.3481
- type: nauc_recall_at_10_std
value: -11.5027
- type: nauc_recall_at_10_diff1
value: 5.0225
- type: nauc_recall_at_20_max
value: 8.8023
- type: nauc_recall_at_20_std
value: -2.2973
- type: nauc_recall_at_20_diff1
value: 0.2097
- type: nauc_recall_at_100_max
value: 23.613799999999998
- type: nauc_recall_at_100_std
value: 5.728599999999999
- type: nauc_recall_at_100_diff1
value: -3.4857
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 59.8106
- type: nauc_precision_at_1_std
value: 41.6091
- type: nauc_precision_at_1_diff1
value: 15.8988
- type: nauc_precision_at_3_max
value: 5.8776
- type: nauc_precision_at_3_std
value: -9.775
- type: nauc_precision_at_3_diff1
value: -21.8474
- type: nauc_precision_at_5_max
value: 7.184799999999999
- type: nauc_precision_at_5_std
value: -15.965399999999999
- type: nauc_precision_at_5_diff1
value: -21.5915
- type: nauc_precision_at_10_max
value: 14.3481
- type: nauc_precision_at_10_std
value: -11.5027
- type: nauc_precision_at_10_diff1
value: 5.0225
- type: nauc_precision_at_20_max
value: 8.8023
- type: nauc_precision_at_20_std
value: -2.2973
- type: nauc_precision_at_20_diff1
value: 0.2097
- type: nauc_precision_at_100_max
value: 23.613799999999998
- type: nauc_precision_at_100_std
value: 5.728599999999999
- type: nauc_precision_at_100_diff1
value: -3.4857
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 59.8106
- type: nauc_mrr_at_1_std
value: 41.6091
- type: nauc_mrr_at_1_diff1
value: 15.8988
- type: nauc_mrr_at_3_max
value: 23.4457
- type: nauc_mrr_at_3_std
value: 6.589200000000001
- type: nauc_mrr_at_3_diff1
value: -9.1205
- type: nauc_mrr_at_5_max
value: 23.0402
- type: nauc_mrr_at_5_std
value: 2.8784
- type: nauc_mrr_at_5_diff1
value: -10.0377
- type: nauc_mrr_at_10_max
value: 23.477
- type: nauc_mrr_at_10_std
value: 1.9317999999999997
- type: nauc_mrr_at_10_diff1
value: -3.1433000000000004
- type: nauc_mrr_at_20_max
value: 21.138199999999998
- type: nauc_mrr_at_20_std
value: 3.3765000000000005
- type: nauc_mrr_at_20_diff1
value: -3.2526
- type: nauc_mrr_at_100_max
value: 21.8857
- type: nauc_mrr_at_100_std
value: 4.147
- type: nauc_mrr_at_100_diff1
value: -3.5649
- type: nauc_mrr_at_1000_max
value: 21.8479
- type: nauc_mrr_at_1000_std
value: 4.0359
- type: nauc_mrr_at_1000_diff1
value: -3.5894000000000004
- type: main_score
value: 5.173
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (hin-ara)
type: facebook/mlqa
config: hin-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.538
- type: ndcg_at_3
value: 2.3619999999999997
- type: ndcg_at_5
value: 3.496
- type: ndcg_at_10
value: 4.166
- type: ndcg_at_20
value: 5.763
- type: ndcg_at_100
value: 16.819
- type: ndcg_at_1000
value: 20.063
- type: map_at_1
value: 0.538
- type: map_at_3
value: 1.882
- type: map_at_5
value: 2.527
- type: map_at_10
value: 2.79
- type: map_at_20
value: 3.2079999999999997
- type: map_at_100
value: 4.555
- type: map_at_1000
value: 4.7379999999999995
- type: recall_at_1
value: 0.538
- type: recall_at_3
value: 3.763
- type: recall_at_5
value: 6.451999999999999
- type: recall_at_10
value: 8.602
- type: recall_at_20
value: 15.054
- type: recall_at_100
value: 77.41900000000001
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.538
- type: precision_at_3
value: 1.254
- type: precision_at_5
value: 1.29
- type: precision_at_10
value: 0.86
- type: precision_at_20
value: 0.753
- type: precision_at_100
value: 0.774
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.5376
- type: mrr_at_3
value: 1.8817
- type: mrr_at_5
value: 2.5269
- type: mrr_at_10
value: 2.7897000000000003
- type: mrr_at_20
value: 3.2081999999999997
- type: mrr_at_100
value: 4.554600000000001
- type: mrr_at_1000
value: 4.7382
- type: nauc_ndcg_at_1_max
value: 100.0
- type: nauc_ndcg_at_1_std
value: 66.7257
- type: nauc_ndcg_at_1_diff1
value: 100.0
- type: nauc_ndcg_at_3_max
value: 29.630000000000003
- type: nauc_ndcg_at_3_std
value: 57.101400000000005
- type: nauc_ndcg_at_3_diff1
value: 22.5155
- type: nauc_ndcg_at_5_max
value: 8.1457
- type: nauc_ndcg_at_5_std
value: 43.9017
- type: nauc_ndcg_at_5_diff1
value: 12.2764
- type: nauc_ndcg_at_10_max
value: 10.8742
- type: nauc_ndcg_at_10_std
value: 35.634100000000004
- type: nauc_ndcg_at_10_diff1
value: 16.8804
- type: nauc_ndcg_at_20_max
value: 8.2366
- type: nauc_ndcg_at_20_std
value: 34.4244
- type: nauc_ndcg_at_20_diff1
value: 10.3725
- type: nauc_ndcg_at_100_max
value: 7.661900000000001
- type: nauc_ndcg_at_100_std
value: 24.1541
- type: nauc_ndcg_at_100_diff1
value: 8.6735
- type: nauc_ndcg_at_1000_max
value: 9.024899999999999
- type: nauc_ndcg_at_1000_std
value: 31.385099999999998
- type: nauc_ndcg_at_1000_diff1
value: 11.6807
- type: nauc_map_at_1_max
value: 100.0
- type: nauc_map_at_1_std
value: 66.7257
- type: nauc_map_at_1_diff1
value: 100.0
- type: nauc_map_at_3_max
value: 37.627500000000005
- type: nauc_map_at_3_std
value: 59.4071
- type: nauc_map_at_3_diff1
value: 27.9837
- type: nauc_map_at_5_max
value: 18.7887
- type: nauc_map_at_5_std
value: 48.7344
- type: nauc_map_at_5_diff1
value: 18.7448
- type: nauc_map_at_10_max
value: 19.7517
- type: nauc_map_at_10_std
value: 43.2046
- type: nauc_map_at_10_diff1
value: 21.3488
- type: nauc_map_at_20_max
value: 17.3749
- type: nauc_map_at_20_std
value: 41.8178
- type: nauc_map_at_20_diff1
value: 17.8946
- type: nauc_map_at_100_max
value: 15.4
- type: nauc_map_at_100_std
value: 37.7516
- type: nauc_map_at_100_diff1
value: 16.4172
- type: nauc_map_at_1000_max
value: 15.743099999999998
- type: nauc_map_at_1000_std
value: 38.642700000000005
- type: nauc_map_at_1000_diff1
value: 16.8576
- type: nauc_recall_at_1_max
value: 100.0
- type: nauc_recall_at_1_std
value: 66.7257
- type: nauc_recall_at_1_diff1
value: 100.0
- type: nauc_recall_at_3_max
value: 17.4401
- type: nauc_recall_at_3_std
value: 53.4353
- type: nauc_recall_at_3_diff1
value: 14.5988
- type: nauc_recall_at_5_max
value: -5.2527
- type: nauc_recall_at_5_std
value: 37.5174
- type: nauc_recall_at_5_diff1
value: 4.3982
- type: nauc_recall_at_10_max
value: 1.6920000000000002
- type: nauc_recall_at_10_std
value: 26.655299999999997
- type: nauc_recall_at_10_diff1
value: 12.6153
- type: nauc_recall_at_20_max
value: 1.2351
- type: nauc_recall_at_20_std
value: 28.0528
- type: nauc_recall_at_20_diff1
value: 3.728
- type: nauc_recall_at_100_max
value: 4.7833
- type: nauc_recall_at_100_std
value: 8.0403
- type: nauc_recall_at_100_diff1
value: 2.0422
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 100.0
- type: nauc_precision_at_1_std
value: 66.7257
- type: nauc_precision_at_1_diff1
value: 100.0
- type: nauc_precision_at_3_max
value: 17.4401
- type: nauc_precision_at_3_std
value: 53.4353
- type: nauc_precision_at_3_diff1
value: 14.5988
- type: nauc_precision_at_5_max
value: -5.2527
- type: nauc_precision_at_5_std
value: 37.5174
- type: nauc_precision_at_5_diff1
value: 4.3982
- type: nauc_precision_at_10_max
value: 1.6920000000000002
- type: nauc_precision_at_10_std
value: 26.655299999999997
- type: nauc_precision_at_10_diff1
value: 12.6153
- type: nauc_precision_at_20_max
value: 1.2351
- type: nauc_precision_at_20_std
value: 28.0528
- type: nauc_precision_at_20_diff1
value: 3.728
- type: nauc_precision_at_100_max
value: 4.7833
- type: nauc_precision_at_100_std
value: 8.0403
- type: nauc_precision_at_100_diff1
value: 2.0422
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 100.0
- type: nauc_mrr_at_1_std
value: 66.7257
- type: nauc_mrr_at_1_diff1
value: 100.0
- type: nauc_mrr_at_3_max
value: 37.627500000000005
- type: nauc_mrr_at_3_std
value: 59.4071
- type: nauc_mrr_at_3_diff1
value: 27.9837
- type: nauc_mrr_at_5_max
value: 18.7887
- type: nauc_mrr_at_5_std
value: 48.7344
- type: nauc_mrr_at_5_diff1
value: 18.7448
- type: nauc_mrr_at_10_max
value: 19.7517
- type: nauc_mrr_at_10_std
value: 43.2046
- type: nauc_mrr_at_10_diff1
value: 21.3488
- type: nauc_mrr_at_20_max
value: 17.3749
- type: nauc_mrr_at_20_std
value: 41.8178
- type: nauc_mrr_at_20_diff1
value: 17.8946
- type: nauc_mrr_at_100_max
value: 15.4
- type: nauc_mrr_at_100_std
value: 37.7516
- type: nauc_mrr_at_100_diff1
value: 16.4172
- type: nauc_mrr_at_1000_max
value: 15.743099999999998
- type: nauc_mrr_at_1000_std
value: 38.642700000000005
- type: nauc_mrr_at_1000_diff1
value: 16.8576
- type: main_score
value: 4.166
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (vie-ara)
type: facebook/mlqa
config: vie-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.0
- type: ndcg_at_3
value: 0.694
- type: ndcg_at_5
value: 1.222
- type: ndcg_at_10
value: 2.809
- type: ndcg_at_20
value: 5.146
- type: ndcg_at_100
value: 14.91
- type: ndcg_at_1000
value: 18.864
- type: map_at_1
value: 0.0
- type: map_at_3
value: 0.511
- type: map_at_5
value: 0.818
- type: map_at_10
value: 1.47
- type: map_at_20
value: 2.12
- type: map_at_100
value: 3.2649999999999997
- type: map_at_1000
value: 3.485
- type: recall_at_1
value: 0.0
- type: recall_at_3
value: 1.2269999999999999
- type: recall_at_5
value: 2.4539999999999997
- type: recall_at_10
value: 7.362
- type: recall_at_20
value: 16.564
- type: recall_at_100
value: 72.393
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 0.0
- type: precision_at_3
value: 0.409
- type: precision_at_5
value: 0.49100000000000005
- type: precision_at_10
value: 0.736
- type: precision_at_20
value: 0.828
- type: precision_at_100
value: 0.724
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 0.0
- type: mrr_at_3
value: 0.5112
- type: mrr_at_5
value: 0.818
- type: mrr_at_10
value: 1.4704
- type: mrr_at_20
value: 2.12
- type: mrr_at_100
value: 3.2646
- type: mrr_at_1000
value: 3.4854999999999996
- type: nauc_ndcg_at_1_max
value: .nan
- type: nauc_ndcg_at_1_std
value: .nan
- type: nauc_ndcg_at_1_diff1
value: .nan
- type: nauc_ndcg_at_3_max
value: -7.0496
- type: nauc_ndcg_at_3_std
value: -32.1514
- type: nauc_ndcg_at_3_diff1
value: -18.6811
- type: nauc_ndcg_at_5_max
value: 13.1797
- type: nauc_ndcg_at_5_std
value: -24.1903
- type: nauc_ndcg_at_5_diff1
value: -29.849500000000003
- type: nauc_ndcg_at_10_max
value: 27.9005
- type: nauc_ndcg_at_10_std
value: -17.3769
- type: nauc_ndcg_at_10_diff1
value: -12.732299999999999
- type: nauc_ndcg_at_20_max
value: 21.567700000000002
- type: nauc_ndcg_at_20_std
value: -4.7954
- type: nauc_ndcg_at_20_diff1
value: -11.060599999999999
- type: nauc_ndcg_at_100_max
value: 11.6238
- type: nauc_ndcg_at_100_std
value: -5.933999999999999
- type: nauc_ndcg_at_100_diff1
value: -2.0311
- type: nauc_ndcg_at_1000_max
value: 17.6537
- type: nauc_ndcg_at_1000_std
value: -8.9981
- type: nauc_ndcg_at_1000_diff1
value: -5.7923
- type: nauc_map_at_1_max
value: .nan
- type: nauc_map_at_1_std
value: .nan
- type: nauc_map_at_1_diff1
value: .nan
- type: nauc_map_at_3_max
value: -8.3328
- type: nauc_map_at_3_std
value: -33.029399999999995
- type: nauc_map_at_3_diff1
value: -20.842299999999998
- type: nauc_map_at_5_max
value: 9.694600000000001
- type: nauc_map_at_5_std
value: -25.795
- type: nauc_map_at_5_diff1
value: -29.718899999999998
- type: nauc_map_at_10_max
value: 24.2406
- type: nauc_map_at_10_std
value: -19.192899999999998
- type: nauc_map_at_10_diff1
value: -16.1405
- type: nauc_map_at_20_max
value: 20.515800000000002
- type: nauc_map_at_20_std
value: -10.6617
- type: nauc_map_at_20_diff1
value: -14.4404
- type: nauc_map_at_100_max
value: 17.603099999999998
- type: nauc_map_at_100_std
value: -11.405
- type: nauc_map_at_100_diff1
value: -9.4802
- type: nauc_map_at_1000_max
value: 18.4729
- type: nauc_map_at_1000_std
value: -11.7628
- type: nauc_map_at_1000_diff1
value: -10.1215
- type: nauc_recall_at_1_max
value: .nan
- type: nauc_recall_at_1_std
value: .nan
- type: nauc_recall_at_1_diff1
value: .nan
- type: nauc_recall_at_3_max
value: -5.286
- type: nauc_recall_at_3_std
value: -30.9445
- type: nauc_recall_at_3_diff1
value: -15.7106
- type: nauc_recall_at_5_max
value: 17.227
- type: nauc_recall_at_5_std
value: -22.3411
- type: nauc_recall_at_5_diff1
value: -30.111900000000002
- type: nauc_recall_at_10_max
value: 30.406
- type: nauc_recall_at_10_std
value: -16.0824
- type: nauc_recall_at_10_diff1
value: -9.9285
- type: nauc_recall_at_20_max
value: 21.794900000000002
- type: nauc_recall_at_20_std
value: -0.7081
- type: nauc_recall_at_20_diff1
value: -8.8937
- type: nauc_recall_at_100_max
value: 3.2778
- type: nauc_recall_at_100_std
value: -0.6836
- type: nauc_recall_at_100_diff1
value: 3.6675
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: .nan
- type: nauc_precision_at_1_std
value: .nan
- type: nauc_precision_at_1_diff1
value: .nan
- type: nauc_precision_at_3_max
value: -5.286
- type: nauc_precision_at_3_std
value: -30.9445
- type: nauc_precision_at_3_diff1
value: -15.7106
- type: nauc_precision_at_5_max
value: 17.227
- type: nauc_precision_at_5_std
value: -22.3411
- type: nauc_precision_at_5_diff1
value: -30.111900000000002
- type: nauc_precision_at_10_max
value: 30.406
- type: nauc_precision_at_10_std
value: -16.0824
- type: nauc_precision_at_10_diff1
value: -9.9285
- type: nauc_precision_at_20_max
value: 21.794900000000002
- type: nauc_precision_at_20_std
value: -0.7081
- type: nauc_precision_at_20_diff1
value: -8.8937
- type: nauc_precision_at_100_max
value: 3.2778
- type: nauc_precision_at_100_std
value: -0.6836
- type: nauc_precision_at_100_diff1
value: 3.6675
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: .nan
- type: nauc_mrr_at_1_std
value: .nan
- type: nauc_mrr_at_1_diff1
value: .nan
- type: nauc_mrr_at_3_max
value: -8.3328
- type: nauc_mrr_at_3_std
value: -33.029399999999995
- type: nauc_mrr_at_3_diff1
value: -20.842299999999998
- type: nauc_mrr_at_5_max
value: 9.694600000000001
- type: nauc_mrr_at_5_std
value: -25.795
- type: nauc_mrr_at_5_diff1
value: -29.718899999999998
- type: nauc_mrr_at_10_max
value: 24.2406
- type: nauc_mrr_at_10_std
value: -19.192899999999998
- type: nauc_mrr_at_10_diff1
value: -16.1405
- type: nauc_mrr_at_20_max
value: 20.515800000000002
- type: nauc_mrr_at_20_std
value: -10.6617
- type: nauc_mrr_at_20_diff1
value: -14.4404
- type: nauc_mrr_at_100_max
value: 17.603099999999998
- type: nauc_mrr_at_100_std
value: -11.405
- type: nauc_mrr_at_100_diff1
value: -9.4802
- type: nauc_mrr_at_1000_max
value: 18.4729
- type: nauc_mrr_at_1000_std
value: -11.7628
- type: nauc_mrr_at_1000_diff1
value: -10.1215
- type: main_score
value: 2.809
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (zho-ara)
type: facebook/mlqa
config: zho-ara
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 1.5959999999999999
- type: ndcg_at_3
value: 2.869
- type: ndcg_at_5
value: 3.3029999999999995
- type: ndcg_at_10
value: 5.124
- type: ndcg_at_20
value: 6.805
- type: ndcg_at_100
value: 14.495
- type: ndcg_at_1000
value: 19.941
- type: map_at_1
value: 1.5959999999999999
- type: map_at_3
value: 2.571
- type: map_at_5
value: 2.81
- type: map_at_10
value: 3.5220000000000002
- type: map_at_20
value: 3.948
- type: map_at_100
value: 4.8309999999999995
- type: map_at_1000
value: 5.128
- type: recall_at_1
value: 1.5959999999999999
- type: recall_at_3
value: 3.723
- type: recall_at_5
value: 4.787
- type: recall_at_10
value: 10.638
- type: recall_at_20
value: 17.553
- type: recall_at_100
value: 61.702
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 1.5959999999999999
- type: precision_at_3
value: 1.2409999999999999
- type: precision_at_5
value: 0.9570000000000001
- type: precision_at_10
value: 1.064
- type: precision_at_20
value: 0.878
- type: precision_at_100
value: 0.617
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 1.5957
- type: mrr_at_3
value: 2.5709
- type: mrr_at_5
value: 2.8103
- type: mrr_at_10
value: 3.5216
- type: mrr_at_20
value: 3.9482999999999997
- type: mrr_at_100
value: 4.8312
- type: mrr_at_1000
value: 5.1277
- type: nauc_ndcg_at_1_max
value: 25.9707
- type: nauc_ndcg_at_1_std
value: 25.9707
- type: nauc_ndcg_at_1_diff1
value: 88.7908
- type: nauc_ndcg_at_3_max
value: 8.0769
- type: nauc_ndcg_at_3_std
value: -1.4973999999999998
- type: nauc_ndcg_at_3_diff1
value: 66.1072
- type: nauc_ndcg_at_5_max
value: 8.4885
- type: nauc_ndcg_at_5_std
value: 1.5889
- type: nauc_ndcg_at_5_diff1
value: 55.131699999999995
- type: nauc_ndcg_at_10_max
value: 4.4135
- type: nauc_ndcg_at_10_std
value: -2.4915
- type: nauc_ndcg_at_10_diff1
value: 40.2008
- type: nauc_ndcg_at_20_max
value: 5.2495
- type: nauc_ndcg_at_20_std
value: -6.4857
- type: nauc_ndcg_at_20_diff1
value: 30.0024
- type: nauc_ndcg_at_100_max
value: 15.6634
- type: nauc_ndcg_at_100_std
value: -2.1768
- type: nauc_ndcg_at_100_diff1
value: 25.4728
- type: nauc_ndcg_at_1000_max
value: 10.8195
- type: nauc_ndcg_at_1000_std
value: -0.9631000000000001
- type: nauc_ndcg_at_1000_diff1
value: 37.1256
- type: nauc_map_at_1_max
value: 25.9707
- type: nauc_map_at_1_std
value: 25.9707
- type: nauc_map_at_1_diff1
value: 88.7908
- type: nauc_map_at_3_max
value: 11.2388
- type: nauc_map_at_3_std
value: 2.7731
- type: nauc_map_at_3_diff1
value: 70.1588
- type: nauc_map_at_5_max
value: 11.5213
- type: nauc_map_at_5_std
value: 4.4621
- type: nauc_map_at_5_diff1
value: 62.586
- type: nauc_map_at_10_max
value: 8.664900000000001
- type: nauc_map_at_10_std
value: 0.9982
- type: nauc_map_at_10_diff1
value: 52.0845
- type: nauc_map_at_20_max
value: 8.7285
- type: nauc_map_at_20_std
value: -0.9410999999999999
- type: nauc_map_at_20_diff1
value: 46.8936
- type: nauc_map_at_100_max
value: 11.1619
- type: nauc_map_at_100_std
value: 0.5134
- type: nauc_map_at_100_diff1
value: 45.5704
- type: nauc_map_at_1000_max
value: 10.7283
- type: nauc_map_at_1000_std
value: 0.6891
- type: nauc_map_at_1000_diff1
value: 47.0302
- type: nauc_recall_at_1_max
value: 25.9707
- type: nauc_recall_at_1_std
value: 25.9707
- type: nauc_recall_at_1_diff1
value: 88.7908
- type: nauc_recall_at_3_max
value: 1.6386999999999998
- type: nauc_recall_at_3_std
value: -10.052
- type: nauc_recall_at_3_diff1
value: 57.8468
- type: nauc_recall_at_5_max
value: 3.0700000000000003
- type: nauc_recall_at_5_std
value: -3.0769
- type: nauc_recall_at_5_diff1
value: 41.4621
- type: nauc_recall_at_10_max
value: -0.44349999999999995
- type: nauc_recall_at_10_std
value: -5.8379
- type: nauc_recall_at_10_diff1
value: 26.6638
- type: nauc_recall_at_20_max
value: 2.3823
- type: nauc_recall_at_20_std
value: -11.5308
- type: nauc_recall_at_20_diff1
value: 13.6577
- type: nauc_recall_at_100_max
value: 24.204600000000003
- type: nauc_recall_at_100_std
value: -4.2306
- type: nauc_recall_at_100_diff1
value: 5.4663
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 25.9707
- type: nauc_precision_at_1_std
value: 25.9707
- type: nauc_precision_at_1_diff1
value: 88.7908
- type: nauc_precision_at_3_max
value: 1.6386999999999998
- type: nauc_precision_at_3_std
value: -10.052
- type: nauc_precision_at_3_diff1
value: 57.8468
- type: nauc_precision_at_5_max
value: 3.0700000000000003
- type: nauc_precision_at_5_std
value: -3.0769
- type: nauc_precision_at_5_diff1
value: 41.4621
- type: nauc_precision_at_10_max
value: -0.44349999999999995
- type: nauc_precision_at_10_std
value: -5.8379
- type: nauc_precision_at_10_diff1
value: 26.6638
- type: nauc_precision_at_20_max
value: 2.3823
- type: nauc_precision_at_20_std
value: -11.5308
- type: nauc_precision_at_20_diff1
value: 13.6577
- type: nauc_precision_at_100_max
value: 24.204600000000003
- type: nauc_precision_at_100_std
value: -4.2306
- type: nauc_precision_at_100_diff1
value: 5.4663
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 25.9707
- type: nauc_mrr_at_1_std
value: 25.9707
- type: nauc_mrr_at_1_diff1
value: 88.7908
- type: nauc_mrr_at_3_max
value: 11.2388
- type: nauc_mrr_at_3_std
value: 2.7731
- type: nauc_mrr_at_3_diff1
value: 70.1588
- type: nauc_mrr_at_5_max
value: 11.5213
- type: nauc_mrr_at_5_std
value: 4.4621
- type: nauc_mrr_at_5_diff1
value: 62.586
- type: nauc_mrr_at_10_max
value: 8.664900000000001
- type: nauc_mrr_at_10_std
value: 0.9982
- type: nauc_mrr_at_10_diff1
value: 52.0845
- type: nauc_mrr_at_20_max
value: 8.7285
- type: nauc_mrr_at_20_std
value: -0.9410999999999999
- type: nauc_mrr_at_20_diff1
value: 46.8936
- type: nauc_mrr_at_100_max
value: 11.1619
- type: nauc_mrr_at_100_std
value: 0.5134
- type: nauc_mrr_at_100_diff1
value: 45.5704
- type: nauc_mrr_at_1000_max
value: 10.7283
- type: nauc_mrr_at_1000_std
value: 0.6891
- type: nauc_mrr_at_1000_diff1
value: 47.0302
- type: main_score
value: 5.124
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-ara)
type: facebook/mlqa
config: ara-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 13.145000000000001
- type: ndcg_at_3
value: 17.358
- type: ndcg_at_5
value: 18.838
- type: ndcg_at_10
value: 20.508000000000003
- type: ndcg_at_20
value: 22.025
- type: ndcg_at_100
value: 24.966
- type: ndcg_at_1000
value: 28.415000000000003
- type: map_at_1
value: 13.135
- type: map_at_3
value: 16.292
- type: map_at_5
value: 17.105999999999998
- type: map_at_10
value: 17.793
- type: map_at_20
value: 18.207
- type: map_at_100
value: 18.590999999999998
- type: map_at_1000
value: 18.698999999999998
- type: recall_at_1
value: 13.135
- type: recall_at_3
value: 20.448
- type: recall_at_5
value: 24.067
- type: recall_at_10
value: 29.242
- type: recall_at_20
value: 35.262
- type: recall_at_100
value: 51.453
- type: recall_at_1000
value: 79.87100000000001
- type: precision_at_1
value: 13.145000000000001
- type: precision_at_3
value: 6.819
- type: precision_at_5
value: 4.8149999999999995
- type: precision_at_10
value: 2.9250000000000003
- type: precision_at_20
value: 1.764
- type: precision_at_100
value: 0.515
- type: precision_at_1000
value: 0.08
- type: mrr_at_1
value: 13.1446
- type: mrr_at_3
value: 16.301
- type: mrr_at_5
value: 17.1158
- type: mrr_at_10
value: 17.802699999999998
- type: mrr_at_20
value: 18.2164
- type: mrr_at_100
value: 18.5997
- type: mrr_at_1000
value: 18.708
- type: nauc_ndcg_at_1_max
value: 54.626
- type: nauc_ndcg_at_1_std
value: 9.7213
- type: nauc_ndcg_at_1_diff1
value: 48.3128
- type: nauc_ndcg_at_3_max
value: 49.8152
- type: nauc_ndcg_at_3_std
value: 10.6486
- type: nauc_ndcg_at_3_diff1
value: 37.6318
- type: nauc_ndcg_at_5_max
value: 49.3946
- type: nauc_ndcg_at_5_std
value: 11.0498
- type: nauc_ndcg_at_5_diff1
value: 36.6375
- type: nauc_ndcg_at_10_max
value: 48.226
- type: nauc_ndcg_at_10_std
value: 11.574900000000001
- type: nauc_ndcg_at_10_diff1
value: 34.591499999999996
- type: nauc_ndcg_at_20_max
value: 47.5075
- type: nauc_ndcg_at_20_std
value: 11.9084
- type: nauc_ndcg_at_20_diff1
value: 33.475300000000004
- type: nauc_ndcg_at_100_max
value: 47.131299999999996
- type: nauc_ndcg_at_100_std
value: 12.7452
- type: nauc_ndcg_at_100_diff1
value: 32.7759
- type: nauc_ndcg_at_1000_max
value: 47.5947
- type: nauc_ndcg_at_1000_std
value: 12.570500000000001
- type: nauc_ndcg_at_1000_diff1
value: 33.3662
- type: nauc_map_at_1_max
value: 54.5764
- type: nauc_map_at_1_std
value: 9.6486
- type: nauc_map_at_1_diff1
value: 48.2862
- type: nauc_map_at_3_max
value: 50.8942
- type: nauc_map_at_3_std
value: 10.4293
- type: nauc_map_at_3_diff1
value: 39.9007
- type: nauc_map_at_5_max
value: 50.61639999999999
- type: nauc_map_at_5_std
value: 10.6779
- type: nauc_map_at_5_diff1
value: 39.2573
- type: nauc_map_at_10_max
value: 50.0815
- type: nauc_map_at_10_std
value: 10.935400000000001
- type: nauc_map_at_10_diff1
value: 38.290400000000005
- type: nauc_map_at_20_max
value: 49.8737
- type: nauc_map_at_20_std
value: 11.0391
- type: nauc_map_at_20_diff1
value: 37.9496
- type: nauc_map_at_100_max
value: 49.7948
- type: nauc_map_at_100_std
value: 11.1509
- type: nauc_map_at_100_diff1
value: 37.8322
- type: nauc_map_at_1000_max
value: 49.818
- type: nauc_map_at_1000_std
value: 11.157300000000001
- type: nauc_map_at_1000_diff1
value: 37.859500000000004
- type: nauc_recall_at_1_max
value: 54.5764
- type: nauc_recall_at_1_std
value: 9.6486
- type: nauc_recall_at_1_diff1
value: 48.2862
- type: nauc_recall_at_3_max
value: 47.1152
- type: nauc_recall_at_3_std
value: 11.1346
- type: nauc_recall_at_3_diff1
value: 32.0666
- type: nauc_recall_at_5_max
value: 46.455600000000004
- type: nauc_recall_at_5_std
value: 11.905100000000001
- type: nauc_recall_at_5_diff1
value: 30.426599999999997
- type: nauc_recall_at_10_max
value: 43.7652
- type: nauc_recall_at_10_std
value: 13.0735
- type: nauc_recall_at_10_diff1
value: 25.9008
- type: nauc_recall_at_20_max
value: 41.6091
- type: nauc_recall_at_20_std
value: 14.041200000000002
- type: nauc_recall_at_20_diff1
value: 22.7051
- type: nauc_recall_at_100_max
value: 40.0424
- type: nauc_recall_at_100_std
value: 17.8576
- type: nauc_recall_at_100_diff1
value: 19.5013
- type: nauc_recall_at_1000_max
value: 39.2051
- type: nauc_recall_at_1000_std
value: 18.9662
- type: nauc_recall_at_1000_diff1
value: 15.2009
- type: nauc_precision_at_1_max
value: 54.626
- type: nauc_precision_at_1_std
value: 9.7213
- type: nauc_precision_at_1_diff1
value: 48.3128
- type: nauc_precision_at_3_max
value: 47.1626
- type: nauc_precision_at_3_std
value: 11.1885
- type: nauc_precision_at_3_diff1
value: 32.0978
- type: nauc_precision_at_5_max
value: 46.5
- type: nauc_precision_at_5_std
value: 11.955300000000001
- type: nauc_precision_at_5_diff1
value: 30.456
- type: nauc_precision_at_10_max
value: 43.8063
- type: nauc_precision_at_10_std
value: 13.1193
- type: nauc_precision_at_10_diff1
value: 25.9284
- type: nauc_precision_at_20_max
value: 41.6532
- type: nauc_precision_at_20_std
value: 14.0865
- type: nauc_precision_at_20_diff1
value: 22.7346
- type: nauc_precision_at_100_max
value: 40.0991
- type: nauc_precision_at_100_std
value: 17.935200000000002
- type: nauc_precision_at_100_diff1
value: 19.545399999999997
- type: nauc_precision_at_1000_max
value: 39.2887
- type: nauc_precision_at_1000_std
value: 19.0859
- type: nauc_precision_at_1000_diff1
value: 15.277
- type: nauc_mrr_at_1_max
value: 54.626
- type: nauc_mrr_at_1_std
value: 9.7213
- type: nauc_mrr_at_1_diff1
value: 48.3128
- type: nauc_mrr_at_3_max
value: 50.938300000000005
- type: nauc_mrr_at_3_std
value: 10.491100000000001
- type: nauc_mrr_at_3_diff1
value: 39.927099999999996
- type: nauc_mrr_at_5_max
value: 50.6598
- type: nauc_mrr_at_5_std
value: 10.7385
- type: nauc_mrr_at_5_diff1
value: 39.2835
- type: nauc_mrr_at_10_max
value: 50.124500000000005
- type: nauc_mrr_at_10_std
value: 10.994900000000001
- type: nauc_mrr_at_10_diff1
value: 38.3166
- type: nauc_mrr_at_20_max
value: 49.9166
- type: nauc_mrr_at_20_std
value: 11.0984
- type: nauc_mrr_at_20_diff1
value: 37.9759
- type: nauc_mrr_at_100_max
value: 49.836200000000005
- type: nauc_mrr_at_100_std
value: 11.2082
- type: nauc_mrr_at_100_diff1
value: 37.8577
- type: nauc_mrr_at_1000_max
value: 49.859500000000004
- type: nauc_mrr_at_1000_std
value: 11.2147
- type: nauc_mrr_at_1000_diff1
value: 37.885000000000005
- type: main_score
value: 20.508000000000003
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-deu)
type: facebook/mlqa
config: ara-deu
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.182
- type: ndcg_at_3
value: 0.358
- type: ndcg_at_5
value: 0.457
- type: ndcg_at_10
value: 0.732
- type: ndcg_at_20
value: 1.065
- type: ndcg_at_100
value: 2.373
- type: ndcg_at_1000
value: 9.254
- type: map_at_1
value: 0.182
- type: map_at_3
value: 0.314
- type: map_at_5
value: 0.368
- type: map_at_10
value: 0.482
- type: map_at_20
value: 0.5720000000000001
- type: map_at_100
value: 0.7250000000000001
- type: map_at_1000
value: 0.889
- type: recall_at_1
value: 0.182
- type: recall_at_3
value: 0.485
- type: recall_at_5
value: 0.728
- type: recall_at_10
value: 1.5779999999999998
- type: recall_at_20
value: 2.913
- type: recall_at_100
value: 10.376000000000001
- type: recall_at_1000
value: 70.419
- type: precision_at_1
value: 0.182
- type: precision_at_3
value: 0.16199999999999998
- type: precision_at_5
value: 0.146
- type: precision_at_10
value: 0.158
- type: precision_at_20
value: 0.146
- type: precision_at_100
value: 0.104
- type: precision_at_1000
value: 0.06999999999999999
- type: mrr_at_1
value: 0.182
- type: mrr_at_3
value: 0.3135
- type: mrr_at_5
value: 0.3681
- type: mrr_at_10
value: 0.4821
- type: mrr_at_20
value: 0.5716
- type: mrr_at_100
value: 0.7255
- type: mrr_at_1000
value: 0.8887
- type: nauc_ndcg_at_1_max
value: 28.624699999999997
- type: nauc_ndcg_at_1_std
value: 6.1873
- type: nauc_ndcg_at_1_diff1
value: 53.0501
- type: nauc_ndcg_at_3_max
value: 3.8078000000000003
- type: nauc_ndcg_at_3_std
value: 2.7539000000000002
- type: nauc_ndcg_at_3_diff1
value: 22.1103
- type: nauc_ndcg_at_5_max
value: 0.6967
- type: nauc_ndcg_at_5_std
value: 1.5486
- type: nauc_ndcg_at_5_diff1
value: 11.990499999999999
- type: nauc_ndcg_at_10_max
value: 0.2519
- type: nauc_ndcg_at_10_std
value: -1.0728
- type: nauc_ndcg_at_10_diff1
value: 0.755
- type: nauc_ndcg_at_20_max
value: -1.6757000000000002
- type: nauc_ndcg_at_20_std
value: -0.3161
- type: nauc_ndcg_at_20_diff1
value: 4.1878
- type: nauc_ndcg_at_100_max
value: -2.2508
- type: nauc_ndcg_at_100_std
value: -5.1434
- type: nauc_ndcg_at_100_diff1
value: -0.15410000000000001
- type: nauc_ndcg_at_1000_max
value: -5.904
- type: nauc_ndcg_at_1000_std
value: -5.141
- type: nauc_ndcg_at_1000_diff1
value: -4.047
- type: nauc_map_at_1_max
value: 28.624699999999997
- type: nauc_map_at_1_std
value: 6.1873
- type: nauc_map_at_1_diff1
value: 53.0501
- type: nauc_map_at_3_max
value: 7.9022
- type: nauc_map_at_3_std
value: 3.8733999999999997
- type: nauc_map_at_3_diff1
value: 27.1528
- type: nauc_map_at_5_max
value: 5.4552000000000005
- type: nauc_map_at_5_std
value: 2.6903
- type: nauc_map_at_5_diff1
value: 19.6651
- type: nauc_map_at_10_max
value: 3.7626
- type: nauc_map_at_10_std
value: 0.9359
- type: nauc_map_at_10_diff1
value: 10.467799999999999
- type: nauc_map_at_20_max
value: 2.3636
- type: nauc_map_at_20_std
value: 1.0025
- type: nauc_map_at_20_diff1
value: 10.8077
- type: nauc_map_at_100_max
value: 0.5793999999999999
- type: nauc_map_at_100_std
value: -1.1226999999999998
- type: nauc_map_at_100_diff1
value: 7.180400000000001
- type: nauc_map_at_1000_max
value: -0.1581
- type: nauc_map_at_1000_std
value: -1.7341
- type: nauc_map_at_1000_diff1
value: 6.1155
- type: nauc_recall_at_1_max
value: 28.624699999999997
- type: nauc_recall_at_1_std
value: 6.1873
- type: nauc_recall_at_1_diff1
value: 53.0501
- type: nauc_recall_at_3_max
value: -3.9881
- type: nauc_recall_at_3_std
value: 0.4971
- type: nauc_recall_at_3_diff1
value: 12.523000000000001
- type: nauc_recall_at_5_max
value: -6.7618
- type: nauc_recall_at_5_std
value: -0.19449999999999998
- type: nauc_recall_at_5_diff1
value: -0.1727
- type: nauc_recall_at_10_max
value: -2.9286
- type: nauc_recall_at_10_std
value: -3.2508000000000004
- type: nauc_recall_at_10_diff1
value: -9.1922
- type: nauc_recall_at_20_max
value: -4.4579
- type: nauc_recall_at_20_std
value: -1.1248
- type: nauc_recall_at_20_diff1
value: 0.1875
- type: nauc_recall_at_100_max
value: -2.4858000000000002
- type: nauc_recall_at_100_std
value: -6.912999999999999
- type: nauc_recall_at_100_diff1
value: -2.0854
- type: nauc_recall_at_1000_max
value: -8.0511
- type: nauc_recall_at_1000_std
value: -5.1655
- type: nauc_recall_at_1000_diff1
value: -7.4412
- type: nauc_precision_at_1_max
value: 28.624699999999997
- type: nauc_precision_at_1_std
value: 6.1873
- type: nauc_precision_at_1_diff1
value: 53.0501
- type: nauc_precision_at_3_max
value: -3.9881
- type: nauc_precision_at_3_std
value: 0.4971
- type: nauc_precision_at_3_diff1
value: 12.523000000000001
- type: nauc_precision_at_5_max
value: -6.7618
- type: nauc_precision_at_5_std
value: -0.19449999999999998
- type: nauc_precision_at_5_diff1
value: -0.1727
- type: nauc_precision_at_10_max
value: -2.9286
- type: nauc_precision_at_10_std
value: -3.2508000000000004
- type: nauc_precision_at_10_diff1
value: -9.1922
- type: nauc_precision_at_20_max
value: -4.4579
- type: nauc_precision_at_20_std
value: -1.1248
- type: nauc_precision_at_20_diff1
value: 0.1875
- type: nauc_precision_at_100_max
value: -2.4858000000000002
- type: nauc_precision_at_100_std
value: -6.912999999999999
- type: nauc_precision_at_100_diff1
value: -2.0854
- type: nauc_precision_at_1000_max
value: -8.1766
- type: nauc_precision_at_1000_std
value: -5.273
- type: nauc_precision_at_1000_diff1
value: -7.5506
- type: nauc_mrr_at_1_max
value: 28.624699999999997
- type: nauc_mrr_at_1_std
value: 6.1873
- type: nauc_mrr_at_1_diff1
value: 53.0501
- type: nauc_mrr_at_3_max
value: 7.9022
- type: nauc_mrr_at_3_std
value: 3.8733999999999997
- type: nauc_mrr_at_3_diff1
value: 27.1528
- type: nauc_mrr_at_5_max
value: 5.4552000000000005
- type: nauc_mrr_at_5_std
value: 2.6903
- type: nauc_mrr_at_5_diff1
value: 19.6651
- type: nauc_mrr_at_10_max
value: 3.7626
- type: nauc_mrr_at_10_std
value: 0.9359
- type: nauc_mrr_at_10_diff1
value: 10.467799999999999
- type: nauc_mrr_at_20_max
value: 2.3636
- type: nauc_mrr_at_20_std
value: 1.0025
- type: nauc_mrr_at_20_diff1
value: 10.8077
- type: nauc_mrr_at_100_max
value: 0.5793999999999999
- type: nauc_mrr_at_100_std
value: -1.1226999999999998
- type: nauc_mrr_at_100_diff1
value: 7.180400000000001
- type: nauc_mrr_at_1000_max
value: -0.1628
- type: nauc_mrr_at_1000_std
value: -1.7382000000000002
- type: nauc_mrr_at_1000_diff1
value: 6.1114
- type: main_score
value: 0.732
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-eng)
type: facebook/mlqa
config: ara-eng
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.038
- type: ndcg_at_3
value: 0.13899999999999998
- type: ndcg_at_5
value: 0.23700000000000002
- type: ndcg_at_10
value: 0.31
- type: ndcg_at_20
value: 0.439
- type: ndcg_at_100
value: 1.061
- type: ndcg_at_1000
value: 3.857
- type: map_at_1
value: 0.038
- type: map_at_3
value: 0.109
- type: map_at_5
value: 0.163
- type: map_at_10
value: 0.193
- type: map_at_20
value: 0.22899999999999998
- type: map_at_100
value: 0.306
- type: map_at_1000
value: 0.373
- type: recall_at_1
value: 0.038
- type: recall_at_3
value: 0.22499999999999998
- type: recall_at_5
value: 0.469
- type: recall_at_10
value: 0.694
- type: recall_at_20
value: 1.2
- type: recall_at_100
value: 4.689
- type: recall_at_1000
value: 29.060000000000002
- type: precision_at_1
value: 0.038
- type: precision_at_3
value: 0.075
- type: precision_at_5
value: 0.094
- type: precision_at_10
value: 0.06899999999999999
- type: precision_at_20
value: 0.06
- type: precision_at_100
value: 0.047
- type: precision_at_1000
value: 0.029
- type: mrr_at_1
value: 0.0375
- type: mrr_at_3
value: 0.1094
- type: mrr_at_5
value: 0.1629
- type: mrr_at_10
value: 0.19319999999999998
- type: mrr_at_20
value: 0.2287
- type: mrr_at_100
value: 0.3061
- type: mrr_at_1000
value: 0.373
- type: nauc_ndcg_at_1_max
value: 25.0247
- type: nauc_ndcg_at_1_std
value: 100.0
- type: nauc_ndcg_at_1_diff1
value: 21.269099999999998
- type: nauc_ndcg_at_3_max
value: -2.6221
- type: nauc_ndcg_at_3_std
value: 58.781499999999994
- type: nauc_ndcg_at_3_diff1
value: -8.5801
- type: nauc_ndcg_at_5_max
value: 11.3108
- type: nauc_ndcg_at_5_std
value: 52.609300000000005
- type: nauc_ndcg_at_5_diff1
value: -1.0551
- type: nauc_ndcg_at_10_max
value: 16.031000000000002
- type: nauc_ndcg_at_10_std
value: 45.3023
- type: nauc_ndcg_at_10_diff1
value: 5.7653
- type: nauc_ndcg_at_20_max
value: 9.3925
- type: nauc_ndcg_at_20_std
value: 30.537799999999997
- type: nauc_ndcg_at_20_diff1
value: 0.9148999999999999
- type: nauc_ndcg_at_100_max
value: 2.9912
- type: nauc_ndcg_at_100_std
value: 18.066499999999998
- type: nauc_ndcg_at_100_diff1
value: -4.87
- type: nauc_ndcg_at_1000_max
value: 3.5232
- type: nauc_ndcg_at_1000_std
value: 9.6114
- type: nauc_ndcg_at_1000_diff1
value: -2.5008
- type: nauc_map_at_1_max
value: 25.0247
- type: nauc_map_at_1_std
value: 100.0
- type: nauc_map_at_1_diff1
value: 21.269099999999998
- type: nauc_map_at_3_max
value: -0.7981
- type: nauc_map_at_3_std
value: 64.2546
- type: nauc_map_at_3_diff1
value: -6.6277
- type: nauc_map_at_5_max
value: 9.6297
- type: nauc_map_at_5_std
value: 57.415000000000006
- type: nauc_map_at_5_diff1
value: -1.5141
- type: nauc_map_at_10_max
value: 12.7673
- type: nauc_map_at_10_std
value: 51.8795
- type: nauc_map_at_10_diff1
value: 3.0726
- type: nauc_map_at_20_max
value: 9.911399999999999
- type: nauc_map_at_20_std
value: 43.0182
- type: nauc_map_at_20_diff1
value: 1.046
- type: nauc_map_at_100_max
value: 6.8581
- type: nauc_map_at_100_std
value: 35.2906
- type: nauc_map_at_100_diff1
value: -1.5436999999999999
- type: nauc_map_at_1000_max
value: 6.7394
- type: nauc_map_at_1000_std
value: 31.183
- type: nauc_map_at_1000_diff1
value: -1.4350999999999998
- type: nauc_recall_at_1_max
value: 25.0247
- type: nauc_recall_at_1_std
value: 100.0
- type: nauc_recall_at_1_diff1
value: 21.269099999999998
- type: nauc_recall_at_3_max
value: -5.088
- type: nauc_recall_at_3_std
value: 50.689099999999996
- type: nauc_recall_at_3_diff1
value: -11.2155
- type: nauc_recall_at_5_max
value: 13.6279
- type: nauc_recall_at_5_std
value: 47.4024
- type: nauc_recall_at_5_diff1
value: -0.1403
- type: nauc_recall_at_10_max
value: 19.7762
- type: nauc_recall_at_10_std
value: 38.9053
- type: nauc_recall_at_10_diff1
value: 9.001199999999999
- type: nauc_recall_at_20_max
value: 8.4134
- type: nauc_recall_at_20_std
value: 20.3737
- type: nauc_recall_at_20_diff1
value: 0.4812
- type: nauc_recall_at_100_max
value: 1.1665999999999999
- type: nauc_recall_at_100_std
value: 11.3664
- type: nauc_recall_at_100_diff1
value: -6.5212
- type: nauc_recall_at_1000_max
value: 2.8707
- type: nauc_recall_at_1000_std
value: 5.8485000000000005
- type: nauc_recall_at_1000_diff1
value: -2.4025000000000003
- type: nauc_precision_at_1_max
value: 25.0247
- type: nauc_precision_at_1_std
value: 100.0
- type: nauc_precision_at_1_diff1
value: 21.269099999999998
- type: nauc_precision_at_3_max
value: -5.088
- type: nauc_precision_at_3_std
value: 50.689099999999996
- type: nauc_precision_at_3_diff1
value: -11.2155
- type: nauc_precision_at_5_max
value: 13.6279
- type: nauc_precision_at_5_std
value: 47.4024
- type: nauc_precision_at_5_diff1
value: -0.1403
- type: nauc_precision_at_10_max
value: 19.7762
- type: nauc_precision_at_10_std
value: 38.9053
- type: nauc_precision_at_10_diff1
value: 9.001199999999999
- type: nauc_precision_at_20_max
value: 8.4134
- type: nauc_precision_at_20_std
value: 20.3737
- type: nauc_precision_at_20_diff1
value: 0.4812
- type: nauc_precision_at_100_max
value: 1.1665999999999999
- type: nauc_precision_at_100_std
value: 11.3664
- type: nauc_precision_at_100_diff1
value: -6.5212
- type: nauc_precision_at_1000_max
value: 2.8549
- type: nauc_precision_at_1000_std
value: 5.8442
- type: nauc_precision_at_1000_diff1
value: -2.3865999999999996
- type: nauc_mrr_at_1_max
value: 25.0247
- type: nauc_mrr_at_1_std
value: 100.0
- type: nauc_mrr_at_1_diff1
value: 21.269099999999998
- type: nauc_mrr_at_3_max
value: -0.7981
- type: nauc_mrr_at_3_std
value: 64.2546
- type: nauc_mrr_at_3_diff1
value: -6.6277
- type: nauc_mrr_at_5_max
value: 9.6297
- type: nauc_mrr_at_5_std
value: 57.415000000000006
- type: nauc_mrr_at_5_diff1
value: -1.5141
- type: nauc_mrr_at_10_max
value: 12.7673
- type: nauc_mrr_at_10_std
value: 51.8795
- type: nauc_mrr_at_10_diff1
value: 3.0726
- type: nauc_mrr_at_20_max
value: 9.911399999999999
- type: nauc_mrr_at_20_std
value: 43.0182
- type: nauc_mrr_at_20_diff1
value: 1.046
- type: nauc_mrr_at_100_max
value: 6.8581
- type: nauc_mrr_at_100_std
value: 35.2906
- type: nauc_mrr_at_100_diff1
value: -1.5436999999999999
- type: nauc_mrr_at_1000_max
value: 6.7368999999999994
- type: nauc_mrr_at_1000_std
value: 31.181199999999997
- type: nauc_mrr_at_1000_diff1
value: -1.4328
- type: main_score
value: 0.31
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-spa)
type: facebook/mlqa
config: ara-spa
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.051000000000000004
- type: ndcg_at_3
value: 0.19
- type: ndcg_at_5
value: 0.22899999999999998
- type: ndcg_at_10
value: 0.43
- type: ndcg_at_20
value: 0.668
- type: ndcg_at_100
value: 1.687
- type: ndcg_at_1000
value: 7.878
- type: map_at_1
value: 0.051000000000000004
- type: map_at_3
value: 0.152
- type: map_at_5
value: 0.172
- type: map_at_10
value: 0.258
- type: map_at_20
value: 0.32
- type: map_at_100
value: 0.44400000000000006
- type: map_at_1000
value: 0.592
- type: recall_at_1
value: 0.051000000000000004
- type: recall_at_3
value: 0.303
- type: recall_at_5
value: 0.404
- type: recall_at_10
value: 1.011
- type: recall_at_20
value: 1.9720000000000002
- type: recall_at_100
value: 7.735
- type: recall_at_1000
value: 61.729
- type: precision_at_1
value: 0.051000000000000004
- type: precision_at_3
value: 0.101
- type: precision_at_5
value: 0.08099999999999999
- type: precision_at_10
value: 0.101
- type: precision_at_20
value: 0.099
- type: precision_at_100
value: 0.077
- type: precision_at_1000
value: 0.062
- type: mrr_at_1
value: 0.050600000000000006
- type: mrr_at_3
value: 0.1517
- type: mrr_at_5
value: 0.1719
- type: mrr_at_10
value: 0.2578
- type: mrr_at_20
value: 0.3199
- type: mrr_at_100
value: 0.44409999999999994
- type: mrr_at_1000
value: 0.5918
- type: nauc_ndcg_at_1_max
value: 66.2097
- type: nauc_ndcg_at_1_std
value: 66.2097
- type: nauc_ndcg_at_1_diff1
value: 32.419399999999996
- type: nauc_ndcg_at_3_max
value: -3.5048000000000004
- type: nauc_ndcg_at_3_std
value: -1.1603
- type: nauc_ndcg_at_3_diff1
value: 4.6897
- type: nauc_ndcg_at_5_max
value: -9.5677
- type: nauc_ndcg_at_5_std
value: 7.449999999999999
- type: nauc_ndcg_at_5_diff1
value: -5.919300000000001
- type: nauc_ndcg_at_10_max
value: -4.8053
- type: nauc_ndcg_at_10_std
value: 13.3414
- type: nauc_ndcg_at_10_diff1
value: -5.1068
- type: nauc_ndcg_at_20_max
value: -2.2846
- type: nauc_ndcg_at_20_std
value: 7.589700000000001
- type: nauc_ndcg_at_20_diff1
value: -2.1516
- type: nauc_ndcg_at_100_max
value: 1.1325999999999998
- type: nauc_ndcg_at_100_std
value: 3.0970999999999997
- type: nauc_ndcg_at_100_diff1
value: 1.9342000000000001
- type: nauc_ndcg_at_1000_max
value: 0.7024
- type: nauc_ndcg_at_1000_std
value: 4.9341
- type: nauc_ndcg_at_1000_diff1
value: 2.2851
- type: nauc_map_at_1_max
value: 66.2097
- type: nauc_map_at_1_std
value: 66.2097
- type: nauc_map_at_1_diff1
value: 32.419399999999996
- type: nauc_map_at_3_max
value: 1.5827
- type: nauc_map_at_3_std
value: 3.7415
- type: nauc_map_at_3_diff1
value: 6.6845
- type: nauc_map_at_5_max
value: -3.1972
- type: nauc_map_at_5_std
value: 9.103
- type: nauc_map_at_5_diff1
value: -0.8668
- type: nauc_map_at_10_max
value: -2.1843000000000004
- type: nauc_map_at_10_std
value: 12.824399999999999
- type: nauc_map_at_10_diff1
value: -2.0369
- type: nauc_map_at_20_max
value: -1.4794
- type: nauc_map_at_20_std
value: 9.4729
- type: nauc_map_at_20_diff1
value: -0.8819
- type: nauc_map_at_100_max
value: -0.0817
- type: nauc_map_at_100_std
value: 7.3338
- type: nauc_map_at_100_diff1
value: 1.1033
- type: nauc_map_at_1000_max
value: -0.4769
- type: nauc_map_at_1000_std
value: 6.927
- type: nauc_map_at_1000_diff1
value: 0.9951
- type: nauc_recall_at_1_max
value: 66.2097
- type: nauc_recall_at_1_std
value: 66.2097
- type: nauc_recall_at_1_diff1
value: 32.419399999999996
- type: nauc_recall_at_3_max
value: -10.7387
- type: nauc_recall_at_3_std
value: -8.126999999999999
- type: nauc_recall_at_3_diff1
value: 1.8596000000000001
- type: nauc_recall_at_5_max
value: -17.8157
- type: nauc_recall_at_5_std
value: 6.2334
- type: nauc_recall_at_5_diff1
value: -12.9807
- type: nauc_recall_at_10_max
value: -6.397899999999999
- type: nauc_recall_at_10_std
value: 14.4229
- type: nauc_recall_at_10_diff1
value: -7.5951
- type: nauc_recall_at_20_max
value: -1.9718
- type: nauc_recall_at_20_std
value: 6.3748
- type: nauc_recall_at_20_diff1
value: -2.4903999999999997
- type: nauc_recall_at_100_max
value: 1.9014
- type: nauc_recall_at_100_std
value: 1.3683
- type: nauc_recall_at_100_diff1
value: 2.3786
- type: nauc_recall_at_1000_max
value: 1.6191
- type: nauc_recall_at_1000_std
value: 5.3927000000000005
- type: nauc_recall_at_1000_diff1
value: 3.0677
- type: nauc_precision_at_1_max
value: 66.2097
- type: nauc_precision_at_1_std
value: 66.2097
- type: nauc_precision_at_1_diff1
value: 32.419399999999996
- type: nauc_precision_at_3_max
value: -10.7387
- type: nauc_precision_at_3_std
value: -8.126999999999999
- type: nauc_precision_at_3_diff1
value: 1.8596000000000001
- type: nauc_precision_at_5_max
value: -17.8157
- type: nauc_precision_at_5_std
value: 6.2334
- type: nauc_precision_at_5_diff1
value: -12.9807
- type: nauc_precision_at_10_max
value: -6.397899999999999
- type: nauc_precision_at_10_std
value: 14.4229
- type: nauc_precision_at_10_diff1
value: -7.5951
- type: nauc_precision_at_20_max
value: -1.9718
- type: nauc_precision_at_20_std
value: 6.3748
- type: nauc_precision_at_20_diff1
value: -2.4903999999999997
- type: nauc_precision_at_100_max
value: 1.9014
- type: nauc_precision_at_100_std
value: 1.3683
- type: nauc_precision_at_100_diff1
value: 2.3786
- type: nauc_precision_at_1000_max
value: 1.6191
- type: nauc_precision_at_1000_std
value: 5.3927000000000005
- type: nauc_precision_at_1000_diff1
value: 3.0677
- type: nauc_mrr_at_1_max
value: 66.2097
- type: nauc_mrr_at_1_std
value: 66.2097
- type: nauc_mrr_at_1_diff1
value: 32.419399999999996
- type: nauc_mrr_at_3_max
value: 1.5827
- type: nauc_mrr_at_3_std
value: 3.7415
- type: nauc_mrr_at_3_diff1
value: 6.6845
- type: nauc_mrr_at_5_max
value: -3.1972
- type: nauc_mrr_at_5_std
value: 9.103
- type: nauc_mrr_at_5_diff1
value: -0.8668
- type: nauc_mrr_at_10_max
value: -2.1843000000000004
- type: nauc_mrr_at_10_std
value: 12.824399999999999
- type: nauc_mrr_at_10_diff1
value: -2.0369
- type: nauc_mrr_at_20_max
value: -1.4794
- type: nauc_mrr_at_20_std
value: 9.4729
- type: nauc_mrr_at_20_diff1
value: -0.8819
- type: nauc_mrr_at_100_max
value: -0.0817
- type: nauc_mrr_at_100_std
value: 7.3338
- type: nauc_mrr_at_100_diff1
value: 1.1033
- type: nauc_mrr_at_1000_max
value: -0.4769
- type: nauc_mrr_at_1000_std
value: 6.927
- type: nauc_mrr_at_1000_diff1
value: 0.9951
- type: main_score
value: 0.43
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-hin)
type: facebook/mlqa
config: ara-hin
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.218
- type: ndcg_at_3
value: 0.322
- type: ndcg_at_5
value: 0.38999999999999996
- type: ndcg_at_10
value: 0.7230000000000001
- type: ndcg_at_20
value: 1.004
- type: ndcg_at_100
value: 2.493
- type: ndcg_at_1000
value: 9.104
- type: map_at_1
value: 0.218
- type: map_at_3
value: 0.3
- type: map_at_5
value: 0.33899999999999997
- type: map_at_10
value: 0.475
- type: map_at_20
value: 0.547
- type: map_at_100
value: 0.7250000000000001
- type: map_at_1000
value: 0.8829999999999999
- type: recall_at_1
value: 0.218
- type: recall_at_3
value: 0.382
- type: recall_at_5
value: 0.5459999999999999
- type: recall_at_10
value: 1.584
- type: recall_at_20
value: 2.731
- type: recall_at_100
value: 11.196
- type: recall_at_1000
value: 68.815
- type: precision_at_1
value: 0.218
- type: precision_at_3
value: 0.127
- type: precision_at_5
value: 0.109
- type: precision_at_10
value: 0.158
- type: precision_at_20
value: 0.13699999999999998
- type: precision_at_100
value: 0.11199999999999999
- type: precision_at_1000
value: 0.06899999999999999
- type: mrr_at_1
value: 0.2185
- type: mrr_at_3
value: 0.3004
- type: mrr_at_5
value: 0.3386
- type: mrr_at_10
value: 0.4749
- type: mrr_at_20
value: 0.547
- type: mrr_at_100
value: 0.7244999999999999
- type: mrr_at_1000
value: 0.8832
- type: nauc_ndcg_at_1_max
value: 12.828800000000001
- type: nauc_ndcg_at_1_std
value: 12.828800000000001
- type: nauc_ndcg_at_1_diff1
value: 11.947199999999999
- type: nauc_ndcg_at_3_max
value: 12.5981
- type: nauc_ndcg_at_3_std
value: 21.1562
- type: nauc_ndcg_at_3_diff1
value: 9.2582
- type: nauc_ndcg_at_5_max
value: 14.901800000000001
- type: nauc_ndcg_at_5_std
value: 18.6988
- type: nauc_ndcg_at_5_diff1
value: 14.119000000000002
- type: nauc_ndcg_at_10_max
value: -0.8004000000000001
- type: nauc_ndcg_at_10_std
value: 7.9477
- type: nauc_ndcg_at_10_diff1
value: 2.8608000000000002
- type: nauc_ndcg_at_20_max
value: 0.4824
- type: nauc_ndcg_at_20_std
value: 11.9344
- type: nauc_ndcg_at_20_diff1
value: -4.9617
- type: nauc_ndcg_at_100_max
value: 3.257
- type: nauc_ndcg_at_100_std
value: 3.4608
- type: nauc_ndcg_at_100_diff1
value: 5.3857
- type: nauc_ndcg_at_1000_max
value: -2.4372000000000003
- type: nauc_ndcg_at_1000_std
value: -1.0752
- type: nauc_ndcg_at_1000_diff1
value: 2.1543
- type: nauc_map_at_1_max
value: 12.828800000000001
- type: nauc_map_at_1_std
value: 12.828800000000001
- type: nauc_map_at_1_diff1
value: 11.947199999999999
- type: nauc_map_at_3_max
value: 12.6329
- type: nauc_map_at_3_std
value: 19.8994
- type: nauc_map_at_3_diff1
value: 9.664
- type: nauc_map_at_5_max
value: 14.0908
- type: nauc_map_at_5_std
value: 18.2199
- type: nauc_map_at_5_diff1
value: 12.865699999999999
- type: nauc_map_at_10_max
value: 4.3515999999999995
- type: nauc_map_at_10_std
value: 11.3301
- type: nauc_map_at_10_diff1
value: 6.399000000000001
- type: nauc_map_at_20_max
value: 3.9482999999999997
- type: nauc_map_at_20_std
value: 12.4301
- type: nauc_map_at_20_diff1
value: 2.2731000000000003
- type: nauc_map_at_100_max
value: 4.5962000000000005
- type: nauc_map_at_100_std
value: 8.9138
- type: nauc_map_at_100_diff1
value: 4.7346
- type: nauc_map_at_1000_max
value: 3.7624999999999997
- type: nauc_map_at_1000_std
value: 7.8308
- type: nauc_map_at_1000_diff1
value: 4.3517
- type: nauc_recall_at_1_max
value: 12.828800000000001
- type: nauc_recall_at_1_std
value: 12.828800000000001
- type: nauc_recall_at_1_diff1
value: 11.947199999999999
- type: nauc_recall_at_3_max
value: 12.520999999999999
- type: nauc_recall_at_3_std
value: 23.9397
- type: nauc_recall_at_3_diff1
value: 8.3594
- type: nauc_recall_at_5_max
value: 16.5653
- type: nauc_recall_at_5_std
value: 19.4884
- type: nauc_recall_at_5_diff1
value: 16.6947
- type: nauc_recall_at_10_max
value: -6.5468
- type: nauc_recall_at_10_std
value: 4.1849
- type: nauc_recall_at_10_diff1
value: -1.2863
- type: nauc_recall_at_20_max
value: -1.7106
- type: nauc_recall_at_20_std
value: 12.2516
- type: nauc_recall_at_20_diff1
value: -11.3388
- type: nauc_recall_at_100_max
value: 3.1510000000000002
- type: nauc_recall_at_100_std
value: 1.1705
- type: nauc_recall_at_100_diff1
value: 6.681900000000001
- type: nauc_recall_at_1000_max
value: -6.5283999999999995
- type: nauc_recall_at_1000_std
value: -5.6811
- type: nauc_recall_at_1000_diff1
value: 0.9051999999999999
- type: nauc_precision_at_1_max
value: 12.828800000000001
- type: nauc_precision_at_1_std
value: 12.828800000000001
- type: nauc_precision_at_1_diff1
value: 11.947199999999999
- type: nauc_precision_at_3_max
value: 12.520999999999999
- type: nauc_precision_at_3_std
value: 23.9397
- type: nauc_precision_at_3_diff1
value: 8.3594
- type: nauc_precision_at_5_max
value: 16.5653
- type: nauc_precision_at_5_std
value: 19.4884
- type: nauc_precision_at_5_diff1
value: 16.6947
- type: nauc_precision_at_10_max
value: -6.5468
- type: nauc_precision_at_10_std
value: 4.1849
- type: nauc_precision_at_10_diff1
value: -1.2863
- type: nauc_precision_at_20_max
value: -1.7106
- type: nauc_precision_at_20_std
value: 12.2516
- type: nauc_precision_at_20_diff1
value: -11.3388
- type: nauc_precision_at_100_max
value: 3.1510000000000002
- type: nauc_precision_at_100_std
value: 1.1705
- type: nauc_precision_at_100_diff1
value: 6.681900000000001
- type: nauc_precision_at_1000_max
value: -6.5283999999999995
- type: nauc_precision_at_1000_std
value: -5.6811
- type: nauc_precision_at_1000_diff1
value: 0.9051999999999999
- type: nauc_mrr_at_1_max
value: 12.828800000000001
- type: nauc_mrr_at_1_std
value: 12.828800000000001
- type: nauc_mrr_at_1_diff1
value: 11.947199999999999
- type: nauc_mrr_at_3_max
value: 12.6329
- type: nauc_mrr_at_3_std
value: 19.8994
- type: nauc_mrr_at_3_diff1
value: 9.664
- type: nauc_mrr_at_5_max
value: 14.0908
- type: nauc_mrr_at_5_std
value: 18.2199
- type: nauc_mrr_at_5_diff1
value: 12.865699999999999
- type: nauc_mrr_at_10_max
value: 4.3515999999999995
- type: nauc_mrr_at_10_std
value: 11.3301
- type: nauc_mrr_at_10_diff1
value: 6.399000000000001
- type: nauc_mrr_at_20_max
value: 3.9482999999999997
- type: nauc_mrr_at_20_std
value: 12.4301
- type: nauc_mrr_at_20_diff1
value: 2.2731000000000003
- type: nauc_mrr_at_100_max
value: 4.5962000000000005
- type: nauc_mrr_at_100_std
value: 8.9138
- type: nauc_mrr_at_100_diff1
value: 4.7346
- type: nauc_mrr_at_1000_max
value: 3.7624999999999997
- type: nauc_mrr_at_1000_std
value: 7.8308
- type: nauc_mrr_at_1000_diff1
value: 4.3517
- type: main_score
value: 0.7230000000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-vie)
type: facebook/mlqa
config: ara-vie
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.098
- type: ndcg_at_3
value: 0.22
- type: ndcg_at_5
value: 0.304
- type: ndcg_at_10
value: 0.46499999999999997
- type: ndcg_at_20
value: 0.673
- type: ndcg_at_100
value: 1.469
- type: ndcg_at_1000
value: 7.327999999999999
- type: map_at_1
value: 0.098
- type: map_at_3
value: 0.179
- type: map_at_5
value: 0.22799999999999998
- type: map_at_10
value: 0.296
- type: map_at_20
value: 0.35300000000000004
- type: map_at_100
value: 0.44799999999999995
- type: map_at_1000
value: 0.584
- type: recall_at_1
value: 0.098
- type: recall_at_3
value: 0.34199999999999997
- type: recall_at_5
value: 0.5369999999999999
- type: recall_at_10
value: 1.026
- type: recall_at_20
value: 1.856
- type: recall_at_100
value: 6.351
- type: recall_at_1000
value: 57.694
- type: precision_at_1
value: 0.098
- type: precision_at_3
value: 0.11399999999999999
- type: precision_at_5
value: 0.107
- type: precision_at_10
value: 0.10300000000000001
- type: precision_at_20
value: 0.093
- type: precision_at_100
value: 0.064
- type: precision_at_1000
value: 0.058
- type: mrr_at_1
value: 0.0977
- type: mrr_at_3
value: 0.1791
- type: mrr_at_5
value: 0.22799999999999998
- type: mrr_at_10
value: 0.29650000000000004
- type: mrr_at_20
value: 0.3525
- type: mrr_at_100
value: 0.4483
- type: mrr_at_1000
value: 0.5842
- type: nauc_ndcg_at_1_max
value: -39.0297
- type: nauc_ndcg_at_1_std
value: -45.7382
- type: nauc_ndcg_at_1_diff1
value: -8.7843
- type: nauc_ndcg_at_3_max
value: -24.9691
- type: nauc_ndcg_at_3_std
value: -11.2432
- type: nauc_ndcg_at_3_diff1
value: -27.354
- type: nauc_ndcg_at_5_max
value: -22.1604
- type: nauc_ndcg_at_5_std
value: -11.8447
- type: nauc_ndcg_at_5_diff1
value: -6.9122
- type: nauc_ndcg_at_10_max
value: -23.735
- type: nauc_ndcg_at_10_std
value: -15.4924
- type: nauc_ndcg_at_10_diff1
value: -10.152999999999999
- type: nauc_ndcg_at_20_max
value: -20.741699999999998
- type: nauc_ndcg_at_20_std
value: -13.452300000000001
- type: nauc_ndcg_at_20_diff1
value: -12.496599999999999
- type: nauc_ndcg_at_100_max
value: -10.9657
- type: nauc_ndcg_at_100_std
value: -8.015500000000001
- type: nauc_ndcg_at_100_diff1
value: -4.9342999999999995
- type: nauc_ndcg_at_1000_max
value: -7.3108
- type: nauc_ndcg_at_1000_std
value: -7.736800000000001
- type: nauc_ndcg_at_1000_diff1
value: -5.5809
- type: nauc_map_at_1_max
value: -39.0297
- type: nauc_map_at_1_std
value: -45.7382
- type: nauc_map_at_1_diff1
value: -8.7843
- type: nauc_map_at_3_max
value: -27.5256
- type: nauc_map_at_3_std
value: -17.515
- type: nauc_map_at_3_diff1
value: -23.9777
- type: nauc_map_at_5_max
value: -24.8037
- type: nauc_map_at_5_std
value: -16.636699999999998
- type: nauc_map_at_5_diff1
value: -8.8785
- type: nauc_map_at_10_max
value: -25.373800000000003
- type: nauc_map_at_10_std
value: -17.8539
- type: nauc_map_at_10_diff1
value: -11.072899999999999
- type: nauc_map_at_20_max
value: -24.0998
- type: nauc_map_at_20_std
value: -16.9043
- type: nauc_map_at_20_diff1
value: -12.5078
- type: nauc_map_at_100_max
value: -19.8743
- type: nauc_map_at_100_std
value: -14.344299999999999
- type: nauc_map_at_100_diff1
value: -9.7229
- type: nauc_map_at_1000_max
value: -17.7073
- type: nauc_map_at_1000_std
value: -13.0328
- type: nauc_map_at_1000_diff1
value: -9.25
- type: nauc_recall_at_1_max
value: -39.0297
- type: nauc_recall_at_1_std
value: -45.7382
- type: nauc_recall_at_1_diff1
value: -8.7843
- type: nauc_recall_at_3_max
value: -20.951800000000002
- type: nauc_recall_at_3_std
value: -1.3875
- type: nauc_recall_at_3_diff1
value: -32.6596
- type: nauc_recall_at_5_max
value: -18.723300000000002
- type: nauc_recall_at_5_std
value: -5.7615
- type: nauc_recall_at_5_diff1
value: -3.8796999999999997
- type: nauc_recall_at_10_max
value: -22.3454
- type: nauc_recall_at_10_std
value: -13.831199999999999
- type: nauc_recall_at_10_diff1
value: -9.0449
- type: nauc_recall_at_20_max
value: -17.8615
- type: nauc_recall_at_20_std
value: -10.921899999999999
- type: nauc_recall_at_20_diff1
value: -12.389100000000001
- type: nauc_recall_at_100_max
value: -6.7801
- type: nauc_recall_at_100_std
value: -5.249899999999999
- type: nauc_recall_at_100_diff1
value: -2.3929
- type: nauc_recall_at_1000_max
value: -5.3346
- type: nauc_recall_at_1000_std
value: -7.7999
- type: nauc_recall_at_1000_diff1
value: -5.005
- type: nauc_precision_at_1_max
value: -39.0297
- type: nauc_precision_at_1_std
value: -45.7382
- type: nauc_precision_at_1_diff1
value: -8.7843
- type: nauc_precision_at_3_max
value: -20.951800000000002
- type: nauc_precision_at_3_std
value: -1.3875
- type: nauc_precision_at_3_diff1
value: -32.6596
- type: nauc_precision_at_5_max
value: -18.723300000000002
- type: nauc_precision_at_5_std
value: -5.7615
- type: nauc_precision_at_5_diff1
value: -3.8796999999999997
- type: nauc_precision_at_10_max
value: -22.3454
- type: nauc_precision_at_10_std
value: -13.831199999999999
- type: nauc_precision_at_10_diff1
value: -9.0449
- type: nauc_precision_at_20_max
value: -17.8615
- type: nauc_precision_at_20_std
value: -10.921899999999999
- type: nauc_precision_at_20_diff1
value: -12.389100000000001
- type: nauc_precision_at_100_max
value: -6.7801
- type: nauc_precision_at_100_std
value: -5.249899999999999
- type: nauc_precision_at_100_diff1
value: -2.3929
- type: nauc_precision_at_1000_max
value: -5.3346
- type: nauc_precision_at_1000_std
value: -7.7999
- type: nauc_precision_at_1000_diff1
value: -5.005
- type: nauc_mrr_at_1_max
value: -39.0297
- type: nauc_mrr_at_1_std
value: -45.7382
- type: nauc_mrr_at_1_diff1
value: -8.7843
- type: nauc_mrr_at_3_max
value: -27.5256
- type: nauc_mrr_at_3_std
value: -17.515
- type: nauc_mrr_at_3_diff1
value: -23.9777
- type: nauc_mrr_at_5_max
value: -24.8037
- type: nauc_mrr_at_5_std
value: -16.636699999999998
- type: nauc_mrr_at_5_diff1
value: -8.8785
- type: nauc_mrr_at_10_max
value: -25.373800000000003
- type: nauc_mrr_at_10_std
value: -17.8539
- type: nauc_mrr_at_10_diff1
value: -11.072899999999999
- type: nauc_mrr_at_20_max
value: -24.0998
- type: nauc_mrr_at_20_std
value: -16.9043
- type: nauc_mrr_at_20_diff1
value: -12.5078
- type: nauc_mrr_at_100_max
value: -19.8743
- type: nauc_mrr_at_100_std
value: -14.344299999999999
- type: nauc_mrr_at_100_diff1
value: -9.7229
- type: nauc_mrr_at_1000_max
value: -17.7073
- type: nauc_mrr_at_1000_std
value: -13.0328
- type: nauc_mrr_at_1000_diff1
value: -9.25
- type: main_score
value: 0.46499999999999997
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ara-zho)
type: facebook/mlqa
config: ara-zho
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.105
- type: ndcg_at_3
value: 0.197
- type: ndcg_at_5
value: 0.28200000000000003
- type: ndcg_at_10
value: 0.45799999999999996
- type: ndcg_at_20
value: 0.695
- type: ndcg_at_100
value: 1.595
- type: ndcg_at_1000
value: 7.693
- type: map_at_1
value: 0.105
- type: map_at_3
value: 0.174
- type: map_at_5
value: 0.22100000000000003
- type: map_at_10
value: 0.28800000000000003
- type: map_at_20
value: 0.35200000000000004
- type: map_at_100
value: 0.455
- type: map_at_1000
value: 0.5930000000000001
- type: recall_at_1
value: 0.105
- type: recall_at_3
value: 0.262
- type: recall_at_5
value: 0.471
- type: recall_at_10
value: 1.046
- type: recall_at_20
value: 1.9869999999999999
- type: recall_at_100
value: 7.165000000000001
- type: recall_at_1000
value: 60.826
- type: precision_at_1
value: 0.105
- type: precision_at_3
value: 0.087
- type: precision_at_5
value: 0.094
- type: precision_at_10
value: 0.105
- type: precision_at_20
value: 0.099
- type: precision_at_100
value: 0.07200000000000001
- type: precision_at_1000
value: 0.061
- type: mrr_at_1
value: 0.1046
- type: mrr_at_3
value: 0.1743
- type: mrr_at_5
value: 0.22139999999999999
- type: mrr_at_10
value: 0.28809999999999997
- type: mrr_at_20
value: 0.3525
- type: mrr_at_100
value: 0.45510000000000006
- type: mrr_at_1000
value: 0.5931
- type: nauc_ndcg_at_1_max
value: 54.9196
- type: nauc_ndcg_at_1_std
value: 29.255399999999998
- type: nauc_ndcg_at_1_diff1
value: 83.0875
- type: nauc_ndcg_at_3_max
value: 55.1068
- type: nauc_ndcg_at_3_std
value: 43.5827
- type: nauc_ndcg_at_3_diff1
value: 65.4072
- type: nauc_ndcg_at_5_max
value: 60.8846
- type: nauc_ndcg_at_5_std
value: 53.4801
- type: nauc_ndcg_at_5_diff1
value: 52.855700000000006
- type: nauc_ndcg_at_10_max
value: 42.187000000000005
- type: nauc_ndcg_at_10_std
value: 41.0796
- type: nauc_ndcg_at_10_diff1
value: 31.4853
- type: nauc_ndcg_at_20_max
value: 39.556599999999996
- type: nauc_ndcg_at_20_std
value: 39.8692
- type: nauc_ndcg_at_20_diff1
value: 28.9452
- type: nauc_ndcg_at_100_max
value: 20.7679
- type: nauc_ndcg_at_100_std
value: 23.0806
- type: nauc_ndcg_at_100_diff1
value: 15.4211
- type: nauc_ndcg_at_1000_max
value: 16.6114
- type: nauc_ndcg_at_1000_std
value: 16.4112
- type: nauc_ndcg_at_1000_diff1
value: 10.213700000000001
- type: nauc_map_at_1_max
value: 54.9196
- type: nauc_map_at_1_std
value: 29.255399999999998
- type: nauc_map_at_1_diff1
value: 83.0875
- type: nauc_map_at_3_max
value: 57.2075
- type: nauc_map_at_3_std
value: 43.4043
- type: nauc_map_at_3_diff1
value: 69.78529999999999
- type: nauc_map_at_5_max
value: 60.711999999999996
- type: nauc_map_at_5_std
value: 50.112
- type: nauc_map_at_5_diff1
value: 60.0604
- type: nauc_map_at_10_max
value: 49.7578
- type: nauc_map_at_10_std
value: 43.871300000000005
- type: nauc_map_at_10_diff1
value: 45.129599999999996
- type: nauc_map_at_20_max
value: 46.7772
- type: nauc_map_at_20_std
value: 43.0928
- type: nauc_map_at_20_diff1
value: 40.8293
- type: nauc_map_at_100_max
value: 37.595299999999995
- type: nauc_map_at_100_std
value: 35.288199999999996
- type: nauc_map_at_100_diff1
value: 32.1313
- type: nauc_map_at_1000_max
value: 34.822199999999995
- type: nauc_map_at_1000_std
value: 32.6604
- type: nauc_map_at_1000_diff1
value: 29.493599999999997
- type: nauc_recall_at_1_max
value: 54.9196
- type: nauc_recall_at_1_std
value: 29.255399999999998
- type: nauc_recall_at_1_diff1
value: 83.0875
- type: nauc_recall_at_3_max
value: 50.4794
- type: nauc_recall_at_3_std
value: 43.4043
- type: nauc_recall_at_3_diff1
value: 56.4831
- type: nauc_recall_at_5_max
value: 61.213499999999996
- type: nauc_recall_at_5_std
value: 58.540099999999995
- type: nauc_recall_at_5_diff1
value: 42.0099
- type: nauc_recall_at_10_max
value: 33.8003
- type: nauc_recall_at_10_std
value: 37.2919
- type: nauc_recall_at_10_diff1
value: 17.9128
- type: nauc_recall_at_20_max
value: 34.3856
- type: nauc_recall_at_20_std
value: 36.9134
- type: nauc_recall_at_20_diff1
value: 21.3988
- type: nauc_recall_at_100_max
value: 14.2024
- type: nauc_recall_at_100_std
value: 17.9803
- type: nauc_recall_at_100_diff1
value: 10.1473
- type: nauc_recall_at_1000_max
value: 12.4813
- type: nauc_recall_at_1000_std
value: 11.7174
- type: nauc_recall_at_1000_diff1
value: 5.5424
- type: nauc_precision_at_1_max
value: 54.9196
- type: nauc_precision_at_1_std
value: 29.255399999999998
- type: nauc_precision_at_1_diff1
value: 83.0875
- type: nauc_precision_at_3_max
value: 50.4794
- type: nauc_precision_at_3_std
value: 43.4043
- type: nauc_precision_at_3_diff1
value: 56.4831
- type: nauc_precision_at_5_max
value: 61.213499999999996
- type: nauc_precision_at_5_std
value: 58.540099999999995
- type: nauc_precision_at_5_diff1
value: 42.0099
- type: nauc_precision_at_10_max
value: 33.8003
- type: nauc_precision_at_10_std
value: 37.2919
- type: nauc_precision_at_10_diff1
value: 17.9128
- type: nauc_precision_at_20_max
value: 34.3856
- type: nauc_precision_at_20_std
value: 36.9134
- type: nauc_precision_at_20_diff1
value: 21.3988
- type: nauc_precision_at_100_max
value: 14.2024
- type: nauc_precision_at_100_std
value: 17.9803
- type: nauc_precision_at_100_diff1
value: 10.1473
- type: nauc_precision_at_1000_max
value: 12.4813
- type: nauc_precision_at_1000_std
value: 11.7174
- type: nauc_precision_at_1000_diff1
value: 5.5424
- type: nauc_mrr_at_1_max
value: 54.9196
- type: nauc_mrr_at_1_std
value: 29.255399999999998
- type: nauc_mrr_at_1_diff1
value: 83.0875
- type: nauc_mrr_at_3_max
value: 57.2075
- type: nauc_mrr_at_3_std
value: 43.4043
- type: nauc_mrr_at_3_diff1
value: 69.78529999999999
- type: nauc_mrr_at_5_max
value: 60.711999999999996
- type: nauc_mrr_at_5_std
value: 50.112
- type: nauc_mrr_at_5_diff1
value: 60.0604
- type: nauc_mrr_at_10_max
value: 49.7578
- type: nauc_mrr_at_10_std
value: 43.871300000000005
- type: nauc_mrr_at_10_diff1
value: 45.129599999999996
- type: nauc_mrr_at_20_max
value: 46.7772
- type: nauc_mrr_at_20_std
value: 43.0928
- type: nauc_mrr_at_20_diff1
value: 40.8293
- type: nauc_mrr_at_100_max
value: 37.595299999999995
- type: nauc_mrr_at_100_std
value: 35.288199999999996
- type: nauc_mrr_at_100_diff1
value: 32.1313
- type: nauc_mrr_at_1000_max
value: 34.822199999999995
- type: nauc_mrr_at_1000_std
value: 32.6604
- type: nauc_mrr_at_1000_diff1
value: 29.493599999999997
- type: main_score
value: 0.45799999999999996
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (deu-ara)
type: facebook/mlqa
config: deu-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.243
- type: ndcg_at_3
value: 0.5329999999999999
- type: ndcg_at_5
value: 0.7080000000000001
- type: ndcg_at_10
value: 0.822
- type: ndcg_at_20
value: 1.149
- type: ndcg_at_100
value: 2.443
- type: ndcg_at_1000
value: 9.719999999999999
- type: map_at_1
value: 0.243
- type: map_at_3
value: 0.46499999999999997
- type: map_at_5
value: 0.562
- type: map_at_10
value: 0.607
- type: map_at_20
value: 0.692
- type: map_at_100
value: 0.84
- type: map_at_1000
value: 1.014
- type: recall_at_1
value: 0.243
- type: recall_at_3
value: 0.728
- type: recall_at_5
value: 1.1520000000000001
- type: recall_at_10
value: 1.516
- type: recall_at_20
value: 2.85
- type: recall_at_100
value: 10.309
- type: recall_at_1000
value: 73.681
- type: precision_at_1
value: 0.243
- type: precision_at_3
value: 0.243
- type: precision_at_5
value: 0.22999999999999998
- type: precision_at_10
value: 0.152
- type: precision_at_20
value: 0.14300000000000002
- type: precision_at_100
value: 0.10300000000000001
- type: precision_at_1000
value: 0.074
- type: mrr_at_1
value: 0.2426
- type: mrr_at_3
value: 0.46490000000000004
- type: mrr_at_5
value: 0.562
- type: mrr_at_10
value: 0.6072
- type: mrr_at_20
value: 0.6916
- type: mrr_at_100
value: 0.8397
- type: mrr_at_1000
value: 1.0143
- type: nauc_ndcg_at_1_max
value: 34.470800000000004
- type: nauc_ndcg_at_1_std
value: 17.7296
- type: nauc_ndcg_at_1_diff1
value: 25.4054
- type: nauc_ndcg_at_3_max
value: 51.27589999999999
- type: nauc_ndcg_at_3_std
value: 29.8213
- type: nauc_ndcg_at_3_diff1
value: 19.96
- type: nauc_ndcg_at_5_max
value: 58.739799999999995
- type: nauc_ndcg_at_5_std
value: 24.7685
- type: nauc_ndcg_at_5_diff1
value: 17.957
- type: nauc_ndcg_at_10_max
value: 54.85060000000001
- type: nauc_ndcg_at_10_std
value: 19.6216
- type: nauc_ndcg_at_10_diff1
value: 16.5672
- type: nauc_ndcg_at_20_max
value: 45.870400000000004
- type: nauc_ndcg_at_20_std
value: 14.829500000000001
- type: nauc_ndcg_at_20_diff1
value: 18.0996
- type: nauc_ndcg_at_100_max
value: 33.6706
- type: nauc_ndcg_at_100_std
value: 10.0954
- type: nauc_ndcg_at_100_diff1
value: 9.6092
- type: nauc_ndcg_at_1000_max
value: 25.971300000000003
- type: nauc_ndcg_at_1000_std
value: 4.9195
- type: nauc_ndcg_at_1000_diff1
value: 7.0839
- type: nauc_map_at_1_max
value: 34.470800000000004
- type: nauc_map_at_1_std
value: 17.7296
- type: nauc_map_at_1_diff1
value: 25.4054
- type: nauc_map_at_3_max
value: 49.3966
- type: nauc_map_at_3_std
value: 27.9153
- type: nauc_map_at_3_diff1
value: 20.7442
- type: nauc_map_at_5_max
value: 54.789500000000004
- type: nauc_map_at_5_std
value: 24.4111
- type: nauc_map_at_5_diff1
value: 18.7472
- type: nauc_map_at_10_max
value: 53.115
- type: nauc_map_at_10_std
value: 21.7997
- type: nauc_map_at_10_diff1
value: 18.1703
- type: nauc_map_at_20_max
value: 49.4189
- type: nauc_map_at_20_std
value: 19.4909
- type: nauc_map_at_20_diff1
value: 18.6365
- type: nauc_map_at_100_max
value: 45.3179
- type: nauc_map_at_100_std
value: 17.7435
- type: nauc_map_at_100_diff1
value: 16.0309
- type: nauc_map_at_1000_max
value: 43.352000000000004
- type: nauc_map_at_1000_std
value: 16.3267
- type: nauc_map_at_1000_diff1
value: 15.204300000000002
- type: nauc_recall_at_1_max
value: 34.470800000000004
- type: nauc_recall_at_1_std
value: 17.7296
- type: nauc_recall_at_1_diff1
value: 25.4054
- type: nauc_recall_at_3_max
value: 54.6788
- type: nauc_recall_at_3_std
value: 33.4369
- type: nauc_recall_at_3_diff1
value: 18.488
- type: nauc_recall_at_5_max
value: 64.8516
- type: nauc_recall_at_5_std
value: 25.182100000000002
- type: nauc_recall_at_5_diff1
value: 16.9772
- type: nauc_recall_at_10_max
value: 56.427099999999996
- type: nauc_recall_at_10_std
value: 15.958400000000001
- type: nauc_recall_at_10_diff1
value: 14.3287
- type: nauc_recall_at_20_max
value: 41.0315
- type: nauc_recall_at_20_std
value: 9.7701
- type: nauc_recall_at_20_diff1
value: 17.8564
- type: nauc_recall_at_100_max
value: 27.0754
- type: nauc_recall_at_100_std
value: 6.103
- type: nauc_recall_at_100_diff1
value: 5.9928
- type: nauc_recall_at_1000_max
value: 16.7685
- type: nauc_recall_at_1000_std
value: -0.752
- type: nauc_recall_at_1000_diff1
value: 3.0706
- type: nauc_precision_at_1_max
value: 34.470800000000004
- type: nauc_precision_at_1_std
value: 17.7296
- type: nauc_precision_at_1_diff1
value: 25.4054
- type: nauc_precision_at_3_max
value: 54.6788
- type: nauc_precision_at_3_std
value: 33.4369
- type: nauc_precision_at_3_diff1
value: 18.488
- type: nauc_precision_at_5_max
value: 64.8516
- type: nauc_precision_at_5_std
value: 25.182100000000002
- type: nauc_precision_at_5_diff1
value: 16.9772
- type: nauc_precision_at_10_max
value: 56.427099999999996
- type: nauc_precision_at_10_std
value: 15.958400000000001
- type: nauc_precision_at_10_diff1
value: 14.3287
- type: nauc_precision_at_20_max
value: 41.0315
- type: nauc_precision_at_20_std
value: 9.7701
- type: nauc_precision_at_20_diff1
value: 17.8564
- type: nauc_precision_at_100_max
value: 27.0754
- type: nauc_precision_at_100_std
value: 6.103
- type: nauc_precision_at_100_diff1
value: 5.9928
- type: nauc_precision_at_1000_max
value: 16.7685
- type: nauc_precision_at_1000_std
value: -0.752
- type: nauc_precision_at_1000_diff1
value: 3.0706
- type: nauc_mrr_at_1_max
value: 34.470800000000004
- type: nauc_mrr_at_1_std
value: 17.7296
- type: nauc_mrr_at_1_diff1
value: 25.4054
- type: nauc_mrr_at_3_max
value: 49.3966
- type: nauc_mrr_at_3_std
value: 27.9153
- type: nauc_mrr_at_3_diff1
value: 20.7442
- type: nauc_mrr_at_5_max
value: 54.789500000000004
- type: nauc_mrr_at_5_std
value: 24.4111
- type: nauc_mrr_at_5_diff1
value: 18.7472
- type: nauc_mrr_at_10_max
value: 53.115
- type: nauc_mrr_at_10_std
value: 21.7997
- type: nauc_mrr_at_10_diff1
value: 18.1703
- type: nauc_mrr_at_20_max
value: 49.4189
- type: nauc_mrr_at_20_std
value: 19.4909
- type: nauc_mrr_at_20_diff1
value: 18.6365
- type: nauc_mrr_at_100_max
value: 45.3179
- type: nauc_mrr_at_100_std
value: 17.7435
- type: nauc_mrr_at_100_diff1
value: 16.0309
- type: nauc_mrr_at_1000_max
value: 43.352000000000004
- type: nauc_mrr_at_1000_std
value: 16.3267
- type: nauc_mrr_at_1000_diff1
value: 15.204300000000002
- type: main_score
value: 0.822
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (eng-ara)
type: facebook/mlqa
config: eng-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.375
- type: ndcg_at_3
value: 0.5780000000000001
- type: ndcg_at_5
value: 0.654
- type: ndcg_at_10
value: 0.8250000000000001
- type: ndcg_at_20
value: 1.034
- type: ndcg_at_100
value: 1.7930000000000001
- type: ndcg_at_1000
value: 4.977
- type: map_at_1
value: 0.375
- type: map_at_3
value: 0.525
- type: map_at_5
value: 0.567
- type: map_at_10
value: 0.638
- type: map_at_20
value: 0.696
- type: map_at_100
value: 0.788
- type: map_at_1000
value: 0.868
- type: recall_at_1
value: 0.375
- type: recall_at_3
value: 0.731
- type: recall_at_5
value: 0.919
- type: recall_at_10
value: 1.444
- type: recall_at_20
value: 2.2689999999999997
- type: recall_at_100
value: 6.563
- type: recall_at_1000
value: 34.099000000000004
- type: precision_at_1
value: 0.375
- type: precision_at_3
value: 0.244
- type: precision_at_5
value: 0.184
- type: precision_at_10
value: 0.14400000000000002
- type: precision_at_20
value: 0.11299999999999999
- type: precision_at_100
value: 0.066
- type: precision_at_1000
value: 0.034
- type: mrr_at_1
value: 0.375
- type: mrr_at_3
value: 0.525
- type: mrr_at_5
value: 0.5672
- type: mrr_at_10
value: 0.6383
- type: mrr_at_20
value: 0.6961
- type: mrr_at_100
value: 0.7882
- type: mrr_at_1000
value: 0.8677
- type: nauc_ndcg_at_1_max
value: 56.5121
- type: nauc_ndcg_at_1_std
value: 19.2292
- type: nauc_ndcg_at_1_diff1
value: 18.6031
- type: nauc_ndcg_at_3_max
value: 53.795899999999996
- type: nauc_ndcg_at_3_std
value: 13.674900000000001
- type: nauc_ndcg_at_3_diff1
value: 14.913699999999999
- type: nauc_ndcg_at_5_max
value: 54.0713
- type: nauc_ndcg_at_5_std
value: 16.5134
- type: nauc_ndcg_at_5_diff1
value: 13.835
- type: nauc_ndcg_at_10_max
value: 47.3624
- type: nauc_ndcg_at_10_std
value: 14.0322
- type: nauc_ndcg_at_10_diff1
value: 12.4765
- type: nauc_ndcg_at_20_max
value: 40.5382
- type: nauc_ndcg_at_20_std
value: 13.1801
- type: nauc_ndcg_at_20_diff1
value: 10.8866
- type: nauc_ndcg_at_100_max
value: 27.4861
- type: nauc_ndcg_at_100_std
value: 9.985
- type: nauc_ndcg_at_100_diff1
value: 5.003
- type: nauc_ndcg_at_1000_max
value: 14.236299999999998
- type: nauc_ndcg_at_1000_std
value: 5.5438
- type: nauc_ndcg_at_1000_diff1
value: 3.5621
- type: nauc_map_at_1_max
value: 56.5121
- type: nauc_map_at_1_std
value: 19.2292
- type: nauc_map_at_1_diff1
value: 18.6031
- type: nauc_map_at_3_max
value: 54.069599999999994
- type: nauc_map_at_3_std
value: 14.5317
- type: nauc_map_at_3_diff1
value: 15.2434
- type: nauc_map_at_5_max
value: 54.295
- type: nauc_map_at_5_std
value: 16.362
- type: nauc_map_at_5_diff1
value: 14.560200000000002
- type: nauc_map_at_10_max
value: 50.6652
- type: nauc_map_at_10_std
value: 14.840700000000002
- type: nauc_map_at_10_diff1
value: 13.7079
- type: nauc_map_at_20_max
value: 47.6818
- type: nauc_map_at_20_std
value: 14.355599999999999
- type: nauc_map_at_20_diff1
value: 12.894400000000001
- type: nauc_map_at_100_max
value: 43.4343
- type: nauc_map_at_100_std
value: 13.241
- type: nauc_map_at_100_diff1
value: 11.0841
- type: nauc_map_at_1000_max
value: 40.872
- type: nauc_map_at_1000_std
value: 12.5729
- type: nauc_map_at_1000_diff1
value: 10.5395
- type: nauc_recall_at_1_max
value: 56.5121
- type: nauc_recall_at_1_std
value: 19.2292
- type: nauc_recall_at_1_diff1
value: 18.6031
- type: nauc_recall_at_3_max
value: 53.2864
- type: nauc_recall_at_3_std
value: 11.929499999999999
- type: nauc_recall_at_3_diff1
value: 14.321200000000001
- type: nauc_recall_at_5_max
value: 53.689
- type: nauc_recall_at_5_std
value: 16.997
- type: nauc_recall_at_5_diff1
value: 12.4956
- type: nauc_recall_at_10_max
value: 42.0383
- type: nauc_recall_at_10_std
value: 12.9387
- type: nauc_recall_at_10_diff1
value: 10.699
- type: nauc_recall_at_20_max
value: 31.483
- type: nauc_recall_at_20_std
value: 11.967500000000001
- type: nauc_recall_at_20_diff1
value: 8.6104
- type: nauc_recall_at_100_max
value: 16.9294
- type: nauc_recall_at_100_std
value: 8.0626
- type: nauc_recall_at_100_diff1
value: 0.9781
- type: nauc_recall_at_1000_max
value: 5.0692
- type: nauc_recall_at_1000_std
value: 2.8923
- type: nauc_recall_at_1000_diff1
value: 1.661
- type: nauc_precision_at_1_max
value: 56.5121
- type: nauc_precision_at_1_std
value: 19.2292
- type: nauc_precision_at_1_diff1
value: 18.6031
- type: nauc_precision_at_3_max
value: 53.2864
- type: nauc_precision_at_3_std
value: 11.929499999999999
- type: nauc_precision_at_3_diff1
value: 14.321200000000001
- type: nauc_precision_at_5_max
value: 53.689
- type: nauc_precision_at_5_std
value: 16.997
- type: nauc_precision_at_5_diff1
value: 12.4956
- type: nauc_precision_at_10_max
value: 42.0383
- type: nauc_precision_at_10_std
value: 12.9387
- type: nauc_precision_at_10_diff1
value: 10.699
- type: nauc_precision_at_20_max
value: 31.483
- type: nauc_precision_at_20_std
value: 11.967500000000001
- type: nauc_precision_at_20_diff1
value: 8.6104
- type: nauc_precision_at_100_max
value: 16.9294
- type: nauc_precision_at_100_std
value: 8.0626
- type: nauc_precision_at_100_diff1
value: 0.9781
- type: nauc_precision_at_1000_max
value: 5.0423
- type: nauc_precision_at_1000_std
value: 2.8774
- type: nauc_precision_at_1000_diff1
value: 1.6759
- type: nauc_mrr_at_1_max
value: 56.5121
- type: nauc_mrr_at_1_std
value: 19.2292
- type: nauc_mrr_at_1_diff1
value: 18.6031
- type: nauc_mrr_at_3_max
value: 54.069599999999994
- type: nauc_mrr_at_3_std
value: 14.5317
- type: nauc_mrr_at_3_diff1
value: 15.2434
- type: nauc_mrr_at_5_max
value: 54.295
- type: nauc_mrr_at_5_std
value: 16.362
- type: nauc_mrr_at_5_diff1
value: 14.560200000000002
- type: nauc_mrr_at_10_max
value: 50.6652
- type: nauc_mrr_at_10_std
value: 14.840700000000002
- type: nauc_mrr_at_10_diff1
value: 13.7079
- type: nauc_mrr_at_20_max
value: 47.6818
- type: nauc_mrr_at_20_std
value: 14.355599999999999
- type: nauc_mrr_at_20_diff1
value: 12.894400000000001
- type: nauc_mrr_at_100_max
value: 43.4343
- type: nauc_mrr_at_100_std
value: 13.241
- type: nauc_mrr_at_100_diff1
value: 11.0841
- type: nauc_mrr_at_1000_max
value: 40.8708
- type: nauc_mrr_at_1000_std
value: 12.5722
- type: nauc_mrr_at_1000_diff1
value: 10.54
- type: main_score
value: 0.8250000000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (spa-ara)
type: facebook/mlqa
config: spa-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.253
- type: ndcg_at_3
value: 0.418
- type: ndcg_at_5
value: 0.461
- type: ndcg_at_10
value: 0.715
- type: ndcg_at_20
value: 0.9450000000000001
- type: ndcg_at_100
value: 2.331
- type: ndcg_at_1000
value: 8.574
- type: map_at_1
value: 0.253
- type: map_at_3
value: 0.371
- type: map_at_5
value: 0.396
- type: map_at_10
value: 0.505
- type: map_at_20
value: 0.569
- type: map_at_100
value: 0.733
- type: map_at_1000
value: 0.8829999999999999
- type: recall_at_1
value: 0.253
- type: recall_at_3
value: 0.5559999999999999
- type: recall_at_5
value: 0.657
- type: recall_at_10
value: 1.4160000000000001
- type: recall_at_20
value: 2.326
- type: recall_at_100
value: 10.212
- type: recall_at_1000
value: 64.56
- type: precision_at_1
value: 0.253
- type: precision_at_3
value: 0.185
- type: precision_at_5
value: 0.131
- type: precision_at_10
value: 0.14200000000000002
- type: precision_at_20
value: 0.116
- type: precision_at_100
value: 0.10200000000000001
- type: precision_at_1000
value: 0.065
- type: mrr_at_1
value: 0.25279999999999997
- type: mrr_at_3
value: 0.3707
- type: mrr_at_5
value: 0.396
- type: mrr_at_10
value: 0.5054000000000001
- type: mrr_at_20
value: 0.5688000000000001
- type: mrr_at_100
value: 0.7331
- type: mrr_at_1000
value: 0.8831
- type: nauc_ndcg_at_1_max
value: 51.9741
- type: nauc_ndcg_at_1_std
value: 46.907700000000006
- type: nauc_ndcg_at_1_diff1
value: 30.1964
- type: nauc_ndcg_at_3_max
value: 41.3447
- type: nauc_ndcg_at_3_std
value: 24.360599999999998
- type: nauc_ndcg_at_3_diff1
value: 18.8418
- type: nauc_ndcg_at_5_max
value: 41.0319
- type: nauc_ndcg_at_5_std
value: 25.809199999999997
- type: nauc_ndcg_at_5_diff1
value: 24.909100000000002
- type: nauc_ndcg_at_10_max
value: 36.6761
- type: nauc_ndcg_at_10_std
value: 23.1623
- type: nauc_ndcg_at_10_diff1
value: 24.2909
- type: nauc_ndcg_at_20_max
value: 33.2627
- type: nauc_ndcg_at_20_std
value: 19.0886
- type: nauc_ndcg_at_20_diff1
value: 18.6171
- type: nauc_ndcg_at_100_max
value: 22.1033
- type: nauc_ndcg_at_100_std
value: 10.6684
- type: nauc_ndcg_at_100_diff1
value: 6.77
- type: nauc_ndcg_at_1000_max
value: 17.8432
- type: nauc_ndcg_at_1000_std
value: 5.2092
- type: nauc_ndcg_at_1000_diff1
value: 5.8879
- type: nauc_map_at_1_max
value: 51.9741
- type: nauc_map_at_1_std
value: 46.907700000000006
- type: nauc_map_at_1_diff1
value: 30.1964
- type: nauc_map_at_3_max
value: 42.766799999999996
- type: nauc_map_at_3_std
value: 29.0518
- type: nauc_map_at_3_diff1
value: 20.8244
- type: nauc_map_at_5_max
value: 42.464600000000004
- type: nauc_map_at_5_std
value: 29.7317
- type: nauc_map_at_5_diff1
value: 24.799699999999998
- type: nauc_map_at_10_max
value: 39.827600000000004
- type: nauc_map_at_10_std
value: 27.3121
- type: nauc_map_at_10_diff1
value: 24.6463
- type: nauc_map_at_20_max
value: 37.9365
- type: nauc_map_at_20_std
value: 24.8287
- type: nauc_map_at_20_diff1
value: 21.9878
- type: nauc_map_at_100_max
value: 33.333
- type: nauc_map_at_100_std
value: 20.2466
- type: nauc_map_at_100_diff1
value: 16.561
- type: nauc_map_at_1000_max
value: 31.8401
- type: nauc_map_at_1000_std
value: 18.740499999999997
- type: nauc_map_at_1000_diff1
value: 15.820400000000001
- type: nauc_recall_at_1_max
value: 51.9741
- type: nauc_recall_at_1_std
value: 46.907700000000006
- type: nauc_recall_at_1_diff1
value: 30.1964
- type: nauc_recall_at_3_max
value: 38.6984
- type: nauc_recall_at_3_std
value: 15.0644
- type: nauc_recall_at_3_diff1
value: 14.9959
- type: nauc_recall_at_5_max
value: 38.5959
- type: nauc_recall_at_5_std
value: 18.8551
- type: nauc_recall_at_5_diff1
value: 25.474200000000003
- type: nauc_recall_at_10_max
value: 32.6875
- type: nauc_recall_at_10_std
value: 18.4863
- type: nauc_recall_at_10_diff1
value: 23.8654
- type: nauc_recall_at_20_max
value: 28.6992
- type: nauc_recall_at_20_std
value: 14.019100000000002
- type: nauc_recall_at_20_diff1
value: 14.965100000000001
- type: nauc_recall_at_100_max
value: 16.8806
- type: nauc_recall_at_100_std
value: 7.1583
- type: nauc_recall_at_100_diff1
value: 2.6362
- type: nauc_recall_at_1000_max
value: 12.6884
- type: nauc_recall_at_1000_std
value: 0.3778
- type: nauc_recall_at_1000_diff1
value: 2.9179
- type: nauc_precision_at_1_max
value: 51.9741
- type: nauc_precision_at_1_std
value: 46.907700000000006
- type: nauc_precision_at_1_diff1
value: 30.1964
- type: nauc_precision_at_3_max
value: 38.6984
- type: nauc_precision_at_3_std
value: 15.0644
- type: nauc_precision_at_3_diff1
value: 14.9959
- type: nauc_precision_at_5_max
value: 38.5959
- type: nauc_precision_at_5_std
value: 18.8551
- type: nauc_precision_at_5_diff1
value: 25.474200000000003
- type: nauc_precision_at_10_max
value: 32.6875
- type: nauc_precision_at_10_std
value: 18.4863
- type: nauc_precision_at_10_diff1
value: 23.8654
- type: nauc_precision_at_20_max
value: 28.6992
- type: nauc_precision_at_20_std
value: 14.019100000000002
- type: nauc_precision_at_20_diff1
value: 14.965100000000001
- type: nauc_precision_at_100_max
value: 16.8806
- type: nauc_precision_at_100_std
value: 7.1583
- type: nauc_precision_at_100_diff1
value: 2.6362
- type: nauc_precision_at_1000_max
value: 12.6884
- type: nauc_precision_at_1000_std
value: 0.3778
- type: nauc_precision_at_1000_diff1
value: 2.9179
- type: nauc_mrr_at_1_max
value: 51.9741
- type: nauc_mrr_at_1_std
value: 46.907700000000006
- type: nauc_mrr_at_1_diff1
value: 30.1964
- type: nauc_mrr_at_3_max
value: 42.766799999999996
- type: nauc_mrr_at_3_std
value: 29.0518
- type: nauc_mrr_at_3_diff1
value: 20.8244
- type: nauc_mrr_at_5_max
value: 42.464600000000004
- type: nauc_mrr_at_5_std
value: 29.7317
- type: nauc_mrr_at_5_diff1
value: 24.799699999999998
- type: nauc_mrr_at_10_max
value: 39.827600000000004
- type: nauc_mrr_at_10_std
value: 27.3121
- type: nauc_mrr_at_10_diff1
value: 24.6463
- type: nauc_mrr_at_20_max
value: 37.9365
- type: nauc_mrr_at_20_std
value: 24.8287
- type: nauc_mrr_at_20_diff1
value: 21.9878
- type: nauc_mrr_at_100_max
value: 33.333
- type: nauc_mrr_at_100_std
value: 20.2466
- type: nauc_mrr_at_100_diff1
value: 16.561
- type: nauc_mrr_at_1000_max
value: 31.8401
- type: nauc_mrr_at_1000_std
value: 18.740499999999997
- type: nauc_mrr_at_1000_diff1
value: 15.820400000000001
- type: main_score
value: 0.715
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (hin-ara)
type: facebook/mlqa
config: hin-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.328
- type: ndcg_at_3
value: 0.486
- type: ndcg_at_5
value: 0.683
- type: ndcg_at_10
value: 0.997
- type: ndcg_at_20
value: 1.365
- type: ndcg_at_100
value: 2.706
- type: ndcg_at_1000
value: 9.648
- type: map_at_1
value: 0.328
- type: map_at_3
value: 0.44600000000000006
- type: map_at_5
value: 0.553
- type: map_at_10
value: 0.6799999999999999
- type: map_at_20
value: 0.7779999999999999
- type: map_at_100
value: 0.935
- type: map_at_1000
value: 1.0999999999999999
- type: recall_at_1
value: 0.328
- type: recall_at_3
value: 0.601
- type: recall_at_5
value: 1.0919999999999999
- type: recall_at_10
value: 2.075
- type: recall_at_20
value: 3.55
- type: recall_at_100
value: 11.196
- type: recall_at_1000
value: 71.764
- type: precision_at_1
value: 0.328
- type: precision_at_3
value: 0.2
- type: precision_at_5
value: 0.218
- type: precision_at_10
value: 0.208
- type: precision_at_20
value: 0.17700000000000002
- type: precision_at_100
value: 0.11199999999999999
- type: precision_at_1000
value: 0.07200000000000001
- type: mrr_at_1
value: 0.3277
- type: mrr_at_3
value: 0.44600000000000006
- type: mrr_at_5
value: 0.5525
- type: mrr_at_10
value: 0.6796
- type: mrr_at_20
value: 0.7782
- type: mrr_at_100
value: 0.9353999999999999
- type: mrr_at_1000
value: 1.1002
- type: nauc_ndcg_at_1_max
value: 53.9859
- type: nauc_ndcg_at_1_std
value: -15.8864
- type: nauc_ndcg_at_1_diff1
value: 19.794600000000003
- type: nauc_ndcg_at_3_max
value: 50.3487
- type: nauc_ndcg_at_3_std
value: -15.716
- type: nauc_ndcg_at_3_diff1
value: 27.936299999999996
- type: nauc_ndcg_at_5_max
value: 40.6703
- type: nauc_ndcg_at_5_std
value: -14.965600000000002
- type: nauc_ndcg_at_5_diff1
value: 12.5167
- type: nauc_ndcg_at_10_max
value: 28.513500000000004
- type: nauc_ndcg_at_10_std
value: -12.0676
- type: nauc_ndcg_at_10_diff1
value: 9.7136
- type: nauc_ndcg_at_20_max
value: 23.6262
- type: nauc_ndcg_at_20_std
value: -12.1013
- type: nauc_ndcg_at_20_diff1
value: 9.2594
- type: nauc_ndcg_at_100_max
value: 13.739199999999999
- type: nauc_ndcg_at_100_std
value: -6.6952
- type: nauc_ndcg_at_100_diff1
value: 4.2473
- type: nauc_ndcg_at_1000_max
value: 9.275799999999998
- type: nauc_ndcg_at_1000_std
value: -5.5039
- type: nauc_ndcg_at_1000_diff1
value: 2.4499
- type: nauc_map_at_1_max
value: 53.9859
- type: nauc_map_at_1_std
value: -15.8864
- type: nauc_map_at_1_diff1
value: 19.794600000000003
- type: nauc_map_at_3_max
value: 51.153800000000004
- type: nauc_map_at_3_std
value: -15.7911
- type: nauc_map_at_3_diff1
value: 26.674599999999998
- type: nauc_map_at_5_max
value: 44.6463
- type: nauc_map_at_5_std
value: -15.310699999999999
- type: nauc_map_at_5_diff1
value: 16.8168
- type: nauc_map_at_10_max
value: 36.5886
- type: nauc_map_at_10_std
value: -13.2727
- type: nauc_map_at_10_diff1
value: 14.392199999999999
- type: nauc_map_at_20_max
value: 33.772200000000005
- type: nauc_map_at_20_std
value: -13.108500000000001
- type: nauc_map_at_20_diff1
value: 13.7855
- type: nauc_map_at_100_max
value: 28.4893
- type: nauc_map_at_100_std
value: -11.2989
- type: nauc_map_at_100_diff1
value: 11.4836
- type: nauc_map_at_1000_max
value: 26.9177
- type: nauc_map_at_1000_std
value: -11.165
- type: nauc_map_at_1000_diff1
value: 10.600999999999999
- type: nauc_recall_at_1_max
value: 53.9859
- type: nauc_recall_at_1_std
value: -15.8864
- type: nauc_recall_at_1_diff1
value: 19.794600000000003
- type: nauc_recall_at_3_max
value: 48.5745
- type: nauc_recall_at_3_std
value: -15.5412
- type: nauc_recall_at_3_diff1
value: 30.583900000000003
- type: nauc_recall_at_5_max
value: 34.0788
- type: nauc_recall_at_5_std
value: -14.3783
- type: nauc_recall_at_5_diff1
value: 4.9851
- type: nauc_recall_at_10_max
value: 19.0897
- type: nauc_recall_at_10_std
value: -10.734
- type: nauc_recall_at_10_diff1
value: 4.2515
- type: nauc_recall_at_20_max
value: 14.646
- type: nauc_recall_at_20_std
value: -11.3526
- type: nauc_recall_at_20_diff1
value: 5.4940999999999995
- type: nauc_recall_at_100_max
value: 7.383000000000001
- type: nauc_recall_at_100_std
value: -4.1648
- type: nauc_recall_at_100_diff1
value: 0.9353
- type: nauc_recall_at_1000_max
value: 2.4582
- type: nauc_recall_at_1000_std
value: -1.7946
- type: nauc_recall_at_1000_diff1
value: -0.0116
- type: nauc_precision_at_1_max
value: 53.9859
- type: nauc_precision_at_1_std
value: -15.8864
- type: nauc_precision_at_1_diff1
value: 19.794600000000003
- type: nauc_precision_at_3_max
value: 48.5745
- type: nauc_precision_at_3_std
value: -15.5412
- type: nauc_precision_at_3_diff1
value: 30.583900000000003
- type: nauc_precision_at_5_max
value: 34.0788
- type: nauc_precision_at_5_std
value: -14.3783
- type: nauc_precision_at_5_diff1
value: 4.9851
- type: nauc_precision_at_10_max
value: 19.0897
- type: nauc_precision_at_10_std
value: -10.734
- type: nauc_precision_at_10_diff1
value: 4.2515
- type: nauc_precision_at_20_max
value: 14.646
- type: nauc_precision_at_20_std
value: -11.3526
- type: nauc_precision_at_20_diff1
value: 5.4940999999999995
- type: nauc_precision_at_100_max
value: 7.383000000000001
- type: nauc_precision_at_100_std
value: -4.1648
- type: nauc_precision_at_100_diff1
value: 0.9353
- type: nauc_precision_at_1000_max
value: 2.4582
- type: nauc_precision_at_1000_std
value: -1.7946
- type: nauc_precision_at_1000_diff1
value: -0.0116
- type: nauc_mrr_at_1_max
value: 53.9859
- type: nauc_mrr_at_1_std
value: -15.8864
- type: nauc_mrr_at_1_diff1
value: 19.794600000000003
- type: nauc_mrr_at_3_max
value: 51.153800000000004
- type: nauc_mrr_at_3_std
value: -15.7911
- type: nauc_mrr_at_3_diff1
value: 26.674599999999998
- type: nauc_mrr_at_5_max
value: 44.6463
- type: nauc_mrr_at_5_std
value: -15.310699999999999
- type: nauc_mrr_at_5_diff1
value: 16.8168
- type: nauc_mrr_at_10_max
value: 36.5886
- type: nauc_mrr_at_10_std
value: -13.2727
- type: nauc_mrr_at_10_diff1
value: 14.392199999999999
- type: nauc_mrr_at_20_max
value: 33.772200000000005
- type: nauc_mrr_at_20_std
value: -13.108500000000001
- type: nauc_mrr_at_20_diff1
value: 13.7855
- type: nauc_mrr_at_100_max
value: 28.4893
- type: nauc_mrr_at_100_std
value: -11.2989
- type: nauc_mrr_at_100_diff1
value: 11.4836
- type: nauc_mrr_at_1000_max
value: 26.9177
- type: nauc_mrr_at_1000_std
value: -11.165
- type: nauc_mrr_at_1000_diff1
value: 10.600999999999999
- type: main_score
value: 0.997
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (vie-ara)
type: facebook/mlqa
config: vie-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.391
- type: ndcg_at_3
value: 0.612
- type: ndcg_at_5
value: 0.795
- type: ndcg_at_10
value: 0.9820000000000001
- type: ndcg_at_20
value: 1.239
- type: ndcg_at_100
value: 2.341
- type: ndcg_at_1000
value: 8.206
- type: map_at_1
value: 0.391
- type: map_at_3
value: 0.5539999999999999
- type: map_at_5
value: 0.656
- type: map_at_10
value: 0.733
- type: map_at_20
value: 0.8019999999999999
- type: map_at_100
value: 0.9329999999999999
- type: map_at_1000
value: 1.069
- type: recall_at_1
value: 0.391
- type: recall_at_3
value: 0.782
- type: recall_at_5
value: 1.221
- type: recall_at_10
value: 1.8079999999999998
- type: recall_at_20
value: 2.833
- type: recall_at_100
value: 9.086
- type: recall_at_1000
value: 60.479000000000006
- type: precision_at_1
value: 0.391
- type: precision_at_3
value: 0.261
- type: precision_at_5
value: 0.244
- type: precision_at_10
value: 0.181
- type: precision_at_20
value: 0.14200000000000002
- type: precision_at_100
value: 0.091
- type: precision_at_1000
value: 0.06
- type: mrr_at_1
value: 0.3908
- type: mrr_at_3
value: 0.5537000000000001
- type: mrr_at_5
value: 0.6562
- type: mrr_at_10
value: 0.7326
- type: mrr_at_20
value: 0.8019999999999999
- type: mrr_at_100
value: 0.9327
- type: mrr_at_1000
value: 1.069
- type: nauc_ndcg_at_1_max
value: 22.3169
- type: nauc_ndcg_at_1_std
value: -17.4758
- type: nauc_ndcg_at_1_diff1
value: 1.8166000000000002
- type: nauc_ndcg_at_3_max
value: 17.6929
- type: nauc_ndcg_at_3_std
value: 9.7291
- type: nauc_ndcg_at_3_diff1
value: 7.194599999999999
- type: nauc_ndcg_at_5_max
value: 14.1354
- type: nauc_ndcg_at_5_std
value: 13.7104
- type: nauc_ndcg_at_5_diff1
value: 8.8759
- type: nauc_ndcg_at_10_max
value: 21.5601
- type: nauc_ndcg_at_10_std
value: 16.240299999999998
- type: nauc_ndcg_at_10_diff1
value: 5.8809000000000005
- type: nauc_ndcg_at_20_max
value: 22.5519
- type: nauc_ndcg_at_20_std
value: 15.6586
- type: nauc_ndcg_at_20_diff1
value: 8.152099999999999
- type: nauc_ndcg_at_100_max
value: 18.656100000000002
- type: nauc_ndcg_at_100_std
value: 9.4551
- type: nauc_ndcg_at_100_diff1
value: 7.2737
- type: nauc_ndcg_at_1000_max
value: 11.1981
- type: nauc_ndcg_at_1000_std
value: 5.075699999999999
- type: nauc_ndcg_at_1000_diff1
value: 1.3835
- type: nauc_map_at_1_max
value: 22.3169
- type: nauc_map_at_1_std
value: -17.4758
- type: nauc_map_at_1_diff1
value: 1.8166000000000002
- type: nauc_map_at_3_max
value: 18.4824
- type: nauc_map_at_3_std
value: 4.9891
- type: nauc_map_at_3_diff1
value: 7.0646
- type: nauc_map_at_5_max
value: 15.9382
- type: nauc_map_at_5_std
value: 8.3427
- type: nauc_map_at_5_diff1
value: 8.2007
- type: nauc_map_at_10_max
value: 19.8876
- type: nauc_map_at_10_std
value: 10.2508
- type: nauc_map_at_10_diff1
value: 6.5514
- type: nauc_map_at_20_max
value: 20.333499999999997
- type: nauc_map_at_20_std
value: 10.3019
- type: nauc_map_at_20_diff1
value: 7.6846
- type: nauc_map_at_100_max
value: 19.386
- type: nauc_map_at_100_std
value: 9.1304
- type: nauc_map_at_100_diff1
value: 7.4995
- type: nauc_map_at_1000_max
value: 18.398
- type: nauc_map_at_1000_std
value: 8.7011
- type: nauc_map_at_1000_diff1
value: 6.6249
- type: nauc_recall_at_1_max
value: 22.3169
- type: nauc_recall_at_1_std
value: -17.4758
- type: nauc_recall_at_1_diff1
value: 1.8166000000000002
- type: nauc_recall_at_3_max
value: 16.0786
- type: nauc_recall_at_3_std
value: 19.45
- type: nauc_recall_at_3_diff1
value: 7.2306
- type: nauc_recall_at_5_max
value: 11.106
- type: nauc_recall_at_5_std
value: 22.3805
- type: nauc_recall_at_5_diff1
value: 9.905100000000001
- type: nauc_recall_at_10_max
value: 24.482599999999998
- type: nauc_recall_at_10_std
value: 23.9065
- type: nauc_recall_at_10_diff1
value: 4.6589
- type: nauc_recall_at_20_max
value: 25.4127
- type: nauc_recall_at_20_std
value: 20.5898
- type: nauc_recall_at_20_diff1
value: 8.5451
- type: nauc_recall_at_100_max
value: 17.8939
- type: nauc_recall_at_100_std
value: 8.286200000000001
- type: nauc_recall_at_100_diff1
value: 7.000299999999999
- type: nauc_recall_at_1000_max
value: 6.693499999999999
- type: nauc_recall_at_1000_std
value: 1.6481
- type: nauc_recall_at_1000_diff1
value: -1.6732
- type: nauc_precision_at_1_max
value: 22.3169
- type: nauc_precision_at_1_std
value: -17.4758
- type: nauc_precision_at_1_diff1
value: 1.8166000000000002
- type: nauc_precision_at_3_max
value: 16.0786
- type: nauc_precision_at_3_std
value: 19.45
- type: nauc_precision_at_3_diff1
value: 7.2306
- type: nauc_precision_at_5_max
value: 11.106
- type: nauc_precision_at_5_std
value: 22.3805
- type: nauc_precision_at_5_diff1
value: 9.905100000000001
- type: nauc_precision_at_10_max
value: 24.482599999999998
- type: nauc_precision_at_10_std
value: 23.9065
- type: nauc_precision_at_10_diff1
value: 4.6589
- type: nauc_precision_at_20_max
value: 25.4127
- type: nauc_precision_at_20_std
value: 20.5898
- type: nauc_precision_at_20_diff1
value: 8.5451
- type: nauc_precision_at_100_max
value: 17.8939
- type: nauc_precision_at_100_std
value: 8.286200000000001
- type: nauc_precision_at_100_diff1
value: 7.000299999999999
- type: nauc_precision_at_1000_max
value: 6.693499999999999
- type: nauc_precision_at_1000_std
value: 1.6481
- type: nauc_precision_at_1000_diff1
value: -1.6732
- type: nauc_mrr_at_1_max
value: 22.3169
- type: nauc_mrr_at_1_std
value: -17.4758
- type: nauc_mrr_at_1_diff1
value: 1.8166000000000002
- type: nauc_mrr_at_3_max
value: 18.4824
- type: nauc_mrr_at_3_std
value: 4.9891
- type: nauc_mrr_at_3_diff1
value: 7.0646
- type: nauc_mrr_at_5_max
value: 15.9382
- type: nauc_mrr_at_5_std
value: 8.3427
- type: nauc_mrr_at_5_diff1
value: 8.2007
- type: nauc_mrr_at_10_max
value: 19.8876
- type: nauc_mrr_at_10_std
value: 10.2508
- type: nauc_mrr_at_10_diff1
value: 6.5514
- type: nauc_mrr_at_20_max
value: 20.333499999999997
- type: nauc_mrr_at_20_std
value: 10.3019
- type: nauc_mrr_at_20_diff1
value: 7.6846
- type: nauc_mrr_at_100_max
value: 19.386
- type: nauc_mrr_at_100_std
value: 9.1304
- type: nauc_mrr_at_100_diff1
value: 7.4995
- type: nauc_mrr_at_1000_max
value: 18.398
- type: nauc_mrr_at_1000_std
value: 8.7011
- type: nauc_mrr_at_1000_diff1
value: 6.6249
- type: main_score
value: 0.9820000000000001
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (zho-ara)
type: facebook/mlqa
config: zho-ara
split: test
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: ndcg_at_1
value: 0.942
- type: ndcg_at_3
value: 1.093
- type: ndcg_at_5
value: 1.2189999999999999
- type: ndcg_at_10
value: 1.5010000000000001
- type: ndcg_at_20
value: 1.7500000000000002
- type: ndcg_at_100
value: 2.979
- type: ndcg_at_1000
value: 9.001000000000001
- type: map_at_1
value: 0.942
- type: map_at_3
value: 1.055
- type: map_at_5
value: 1.123
- type: map_at_10
value: 1.236
- type: map_at_20
value: 1.303
- type: map_at_100
value: 1.447
- type: map_at_1000
value: 1.587
- type: recall_at_1
value: 0.942
- type: recall_at_3
value: 1.204
- type: recall_at_5
value: 1.518
- type: recall_at_10
value: 2.407
- type: recall_at_20
value: 3.401
- type: recall_at_100
value: 10.413
- type: recall_at_1000
value: 63.239000000000004
- type: precision_at_1
value: 0.942
- type: precision_at_3
value: 0.40099999999999997
- type: precision_at_5
value: 0.304
- type: precision_at_10
value: 0.241
- type: precision_at_20
value: 0.16999999999999998
- type: precision_at_100
value: 0.104
- type: precision_at_1000
value: 0.063
- type: mrr_at_1
value: 0.9419000000000001
- type: mrr_at_3
value: 1.0553
- type: mrr_at_5
value: 1.1233
- type: mrr_at_10
value: 1.2364
- type: mrr_at_20
value: 1.3032
- type: mrr_at_100
value: 1.4472
- type: mrr_at_1000
value: 1.5868
- type: nauc_ndcg_at_1_max
value: 44.329
- type: nauc_ndcg_at_1_std
value: -22.1462
- type: nauc_ndcg_at_1_diff1
value: 54.6924
- type: nauc_ndcg_at_3_max
value: 44.3874
- type: nauc_ndcg_at_3_std
value: -12.476700000000001
- type: nauc_ndcg_at_3_diff1
value: 43.205799999999996
- type: nauc_ndcg_at_5_max
value: 40.2294
- type: nauc_ndcg_at_5_std
value: -7.8638
- type: nauc_ndcg_at_5_diff1
value: 41.3091
- type: nauc_ndcg_at_10_max
value: 38.2905
- type: nauc_ndcg_at_10_std
value: -5.8234
- type: nauc_ndcg_at_10_diff1
value: 35.6644
- type: nauc_ndcg_at_20_max
value: 32.7502
- type: nauc_ndcg_at_20_std
value: -3.6723
- type: nauc_ndcg_at_20_diff1
value: 32.0788
- type: nauc_ndcg_at_100_max
value: 18.657899999999998
- type: nauc_ndcg_at_100_std
value: 0.0926
- type: nauc_ndcg_at_100_diff1
value: 19.2937
- type: nauc_ndcg_at_1000_max
value: 12.2758
- type: nauc_ndcg_at_1000_std
value: -2.3555
- type: nauc_ndcg_at_1000_diff1
value: 13.314100000000002
- type: nauc_map_at_1_max
value: 44.329
- type: nauc_map_at_1_std
value: -22.1462
- type: nauc_map_at_1_diff1
value: 54.6924
- type: nauc_map_at_3_max
value: 44.405699999999996
- type: nauc_map_at_3_std
value: -14.424600000000002
- type: nauc_map_at_3_diff1
value: 45.6364
- type: nauc_map_at_5_max
value: 42.0327
- type: nauc_map_at_5_std
value: -11.7529
- type: nauc_map_at_5_diff1
value: 44.4403
- type: nauc_map_at_10_max
value: 40.7915
- type: nauc_map_at_10_std
value: -10.4077
- type: nauc_map_at_10_diff1
value: 41.1685
- type: nauc_map_at_20_max
value: 38.574799999999996
- type: nauc_map_at_20_std
value: -9.4044
- type: nauc_map_at_20_diff1
value: 39.5908
- type: nauc_map_at_100_max
value: 34.6009
- type: nauc_map_at_100_std
value: -7.71
- type: nauc_map_at_100_diff1
value: 35.6646
- type: nauc_map_at_1000_max
value: 33.46
- type: nauc_map_at_1000_std
value: -7.535500000000001
- type: nauc_map_at_1000_diff1
value: 34.6565
- type: nauc_recall_at_1_max
value: 44.329
- type: nauc_recall_at_1_std
value: -22.1462
- type: nauc_recall_at_1_diff1
value: 54.6924
- type: nauc_recall_at_3_max
value: 44.3297
- type: nauc_recall_at_3_std
value: -7.5964
- type: nauc_recall_at_3_diff1
value: 37.0708
- type: nauc_recall_at_5_max
value: 35.8238
- type: nauc_recall_at_5_std
value: 1.0823
- type: nauc_recall_at_5_diff1
value: 34.3532
- type: nauc_recall_at_10_max
value: 34.007
- type: nauc_recall_at_10_std
value: 1.8081
- type: nauc_recall_at_10_diff1
value: 26.466099999999997
- type: nauc_recall_at_20_max
value: 24.140900000000002
- type: nauc_recall_at_20_std
value: 4.0295
- type: nauc_recall_at_20_diff1
value: 21.781100000000002
- type: nauc_recall_at_100_max
value: 6.908499999999999
- type: nauc_recall_at_100_std
value: 4.5512
- type: nauc_recall_at_100_diff1
value: 7.940600000000001
- type: nauc_recall_at_1000_max
value: 0.2262
- type: nauc_recall_at_1000_std
value: -2.7483
- type: nauc_recall_at_1000_diff1
value: 1.2992
- type: nauc_precision_at_1_max
value: 44.329
- type: nauc_precision_at_1_std
value: -22.1462
- type: nauc_precision_at_1_diff1
value: 54.6924
- type: nauc_precision_at_3_max
value: 44.3297
- type: nauc_precision_at_3_std
value: -7.5964
- type: nauc_precision_at_3_diff1
value: 37.0708
- type: nauc_precision_at_5_max
value: 35.8238
- type: nauc_precision_at_5_std
value: 1.0823
- type: nauc_precision_at_5_diff1
value: 34.3532
- type: nauc_precision_at_10_max
value: 34.007
- type: nauc_precision_at_10_std
value: 1.8081
- type: nauc_precision_at_10_diff1
value: 26.466099999999997
- type: nauc_precision_at_20_max
value: 24.140900000000002
- type: nauc_precision_at_20_std
value: 4.0295
- type: nauc_precision_at_20_diff1
value: 21.781100000000002
- type: nauc_precision_at_100_max
value: 6.908499999999999
- type: nauc_precision_at_100_std
value: 4.5512
- type: nauc_precision_at_100_diff1
value: 7.940600000000001
- type: nauc_precision_at_1000_max
value: 0.3281
- type: nauc_precision_at_1000_std
value: -2.6999
- type: nauc_precision_at_1000_diff1
value: 1.2890000000000001
- type: nauc_mrr_at_1_max
value: 44.329
- type: nauc_mrr_at_1_std
value: -22.1462
- type: nauc_mrr_at_1_diff1
value: 54.6924
- type: nauc_mrr_at_3_max
value: 44.405699999999996
- type: nauc_mrr_at_3_std
value: -14.424600000000002
- type: nauc_mrr_at_3_diff1
value: 45.6364
- type: nauc_mrr_at_5_max
value: 42.0327
- type: nauc_mrr_at_5_std
value: -11.7529
- type: nauc_mrr_at_5_diff1
value: 44.4403
- type: nauc_mrr_at_10_max
value: 40.7915
- type: nauc_mrr_at_10_std
value: -10.4077
- type: nauc_mrr_at_10_diff1
value: 41.1685
- type: nauc_mrr_at_20_max
value: 38.574799999999996
- type: nauc_mrr_at_20_std
value: -9.4044
- type: nauc_mrr_at_20_diff1
value: 39.5908
- type: nauc_mrr_at_100_max
value: 34.6009
- type: nauc_mrr_at_100_std
value: -7.71
- type: nauc_mrr_at_100_diff1
value: 35.6646
- type: nauc_mrr_at_1000_max
value: 33.461800000000004
- type: nauc_mrr_at_1000_std
value: -7.5348
- type: nauc_mrr_at_1000_diff1
value: 34.6565
- type: main_score
value: 1.5010000000000001
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (ar)
type: jinaai/mintakaqa
config: ar
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: ndcg_at_1
value: 7.308000000000001
- type: ndcg_at_3
value: 10.071
- type: ndcg_at_5
value: 10.985
- type: ndcg_at_10
value: 12.306000000000001
- type: ndcg_at_20
value: 13.205
- type: ndcg_at_100
value: 14.701
- type: ndcg_at_1000
value: 20.005
- type: map_at_1
value: 7.308000000000001
- type: map_at_3
value: 9.366
- type: map_at_5
value: 9.872
- type: map_at_10
value: 10.424999999999999
- type: map_at_20
value: 10.674999999999999
- type: map_at_100
value: 10.859
- type: map_at_1000
value: 10.984
- type: recall_at_1
value: 7.308000000000001
- type: recall_at_3
value: 12.120000000000001
- type: recall_at_5
value: 14.344000000000001
- type: recall_at_10
value: 18.384
- type: recall_at_20
value: 21.925
- type: recall_at_100
value: 30.322
- type: recall_at_1000
value: 76.668
- type: precision_at_1
value: 7.308000000000001
- type: precision_at_3
value: 4.04
- type: precision_at_5
value: 2.869
- type: precision_at_10
value: 1.838
- type: precision_at_20
value: 1.0959999999999999
- type: precision_at_100
value: 0.303
- type: precision_at_1000
value: 0.077
- type: mrr_at_1
value: 7.308199999999999
- type: mrr_at_3
value: 9.366
- type: mrr_at_5
value: 9.8721
- type: mrr_at_10
value: 10.4255
- type: mrr_at_20
value: 10.6746
- type: mrr_at_100
value: 10.8587
- type: mrr_at_1000
value: 10.9839
- type: nauc_ndcg_at_1_max
value: 21.783
- type: nauc_ndcg_at_1_std
value: 20.8127
- type: nauc_ndcg_at_1_diff1
value: 21.791
- type: nauc_ndcg_at_3_max
value: 18.2102
- type: nauc_ndcg_at_3_std
value: 17.9469
- type: nauc_ndcg_at_3_diff1
value: 14.283399999999999
- type: nauc_ndcg_at_5_max
value: 18.4726
- type: nauc_ndcg_at_5_std
value: 19.3571
- type: nauc_ndcg_at_5_diff1
value: 13.2607
- type: nauc_ndcg_at_10_max
value: 18.5108
- type: nauc_ndcg_at_10_std
value: 21.5774
- type: nauc_ndcg_at_10_diff1
value: 11.7807
- type: nauc_ndcg_at_20_max
value: 18.4889
- type: nauc_ndcg_at_20_std
value: 22.3138
- type: nauc_ndcg_at_20_diff1
value: 12.0277
- type: nauc_ndcg_at_100_max
value: 17.5017
- type: nauc_ndcg_at_100_std
value: 21.1196
- type: nauc_ndcg_at_100_diff1
value: 11.5115
- type: nauc_ndcg_at_1000_max
value: 17.2058
- type: nauc_ndcg_at_1000_std
value: 20.3049
- type: nauc_ndcg_at_1000_diff1
value: 11.5737
- type: nauc_map_at_1_max
value: 21.783
- type: nauc_map_at_1_std
value: 20.8127
- type: nauc_map_at_1_diff1
value: 21.791
- type: nauc_map_at_3_max
value: 18.8523
- type: nauc_map_at_3_std
value: 18.4494
- type: nauc_map_at_3_diff1
value: 15.720899999999999
- type: nauc_map_at_5_max
value: 19.0264
- type: nauc_map_at_5_std
value: 19.329
- type: nauc_map_at_5_diff1
value: 15.057100000000002
- type: nauc_map_at_10_max
value: 19.038
- type: nauc_map_at_10_std
value: 20.3913
- type: nauc_map_at_10_diff1
value: 14.2778
- type: nauc_map_at_20_max
value: 19.0167
- type: nauc_map_at_20_std
value: 20.6651
- type: nauc_map_at_20_diff1
value: 14.2818
- type: nauc_map_at_100_max
value: 18.8506
- type: nauc_map_at_100_std
value: 20.5035
- type: nauc_map_at_100_diff1
value: 14.194300000000002
- type: nauc_map_at_1000_max
value: 18.814600000000002
- type: nauc_map_at_1000_std
value: 20.4537
- type: nauc_map_at_1000_diff1
value: 14.1742
- type: nauc_recall_at_1_max
value: 21.783
- type: nauc_recall_at_1_std
value: 20.8127
- type: nauc_recall_at_1_diff1
value: 21.791
- type: nauc_recall_at_3_max
value: 16.7429
- type: nauc_recall_at_3_std
value: 16.8033
- type: nauc_recall_at_3_diff1
value: 10.9673
- type: nauc_recall_at_5_max
value: 17.305400000000002
- type: nauc_recall_at_5_std
value: 19.543
- type: nauc_recall_at_5_diff1
value: 9.339
- type: nauc_recall_at_10_max
value: 17.5378
- type: nauc_recall_at_10_std
value: 24.3867
- type: nauc_recall_at_10_diff1
value: 6.776
- type: nauc_recall_at_20_max
value: 17.6106
- type: nauc_recall_at_20_std
value: 25.9784
- type: nauc_recall_at_20_diff1
value: 8.1176
- type: nauc_recall_at_100_max
value: 14.5343
- type: nauc_recall_at_100_std
value: 21.406
- type: nauc_recall_at_100_diff1
value: 6.8826
- type: nauc_recall_at_1000_max
value: 11.740200000000002
- type: nauc_recall_at_1000_std
value: 16.5951
- type: nauc_recall_at_1000_diff1
value: 5.6598999999999995
- type: nauc_precision_at_1_max
value: 21.783
- type: nauc_precision_at_1_std
value: 20.8127
- type: nauc_precision_at_1_diff1
value: 21.791
- type: nauc_precision_at_3_max
value: 16.7429
- type: nauc_precision_at_3_std
value: 16.8033
- type: nauc_precision_at_3_diff1
value: 10.9673
- type: nauc_precision_at_5_max
value: 17.305400000000002
- type: nauc_precision_at_5_std
value: 19.543
- type: nauc_precision_at_5_diff1
value: 9.339
- type: nauc_precision_at_10_max
value: 17.5378
- type: nauc_precision_at_10_std
value: 24.3867
- type: nauc_precision_at_10_diff1
value: 6.776
- type: nauc_precision_at_20_max
value: 17.6106
- type: nauc_precision_at_20_std
value: 25.9784
- type: nauc_precision_at_20_diff1
value: 8.1176
- type: nauc_precision_at_100_max
value: 14.5343
- type: nauc_precision_at_100_std
value: 21.406
- type: nauc_precision_at_100_diff1
value: 6.8826
- type: nauc_precision_at_1000_max
value: 11.740200000000002
- type: nauc_precision_at_1000_std
value: 16.5951
- type: nauc_precision_at_1000_diff1
value: 5.6598999999999995
- type: nauc_mrr_at_1_max
value: 21.783
- type: nauc_mrr_at_1_std
value: 20.8127
- type: nauc_mrr_at_1_diff1
value: 21.791
- type: nauc_mrr_at_3_max
value: 18.8523
- type: nauc_mrr_at_3_std
value: 18.4494
- type: nauc_mrr_at_3_diff1
value: 15.720899999999999
- type: nauc_mrr_at_5_max
value: 19.0264
- type: nauc_mrr_at_5_std
value: 19.329
- type: nauc_mrr_at_5_diff1
value: 15.057100000000002
- type: nauc_mrr_at_10_max
value: 19.038
- type: nauc_mrr_at_10_std
value: 20.3913
- type: nauc_mrr_at_10_diff1
value: 14.2778
- type: nauc_mrr_at_20_max
value: 19.0167
- type: nauc_mrr_at_20_std
value: 20.6651
- type: nauc_mrr_at_20_diff1
value: 14.2818
- type: nauc_mrr_at_100_max
value: 18.8506
- type: nauc_mrr_at_100_std
value: 20.5035
- type: nauc_mrr_at_100_diff1
value: 14.194300000000002
- type: nauc_mrr_at_1000_max
value: 18.814600000000002
- type: nauc_mrr_at_1000_std
value: 20.4537
- type: nauc_mrr_at_1000_diff1
value: 14.1742
- type: main_score
value: 12.306000000000001
- task:
type: Retrieval
dataset:
name: MTEB MrTidyRetrieval (arabic)
type: mteb/mrtidy
config: arabic
split: test
revision: fc24a3ce8f09746410daee3d5cd823ff7a0675b7
metrics:
- type: ndcg_at_1
value: 2.128
- type: ndcg_at_3
value: 2.632
- type: ndcg_at_5
value: 3.2329999999999997
- type: ndcg_at_10
value: 3.9469999999999996
- type: ndcg_at_20
value: 4.4479999999999995
- type: ndcg_at_100
value: 6.2330000000000005
- type: ndcg_at_1000
value: 8.812000000000001
- type: map_at_1
value: 1.989
- type: map_at_3
value: 2.444
- type: map_at_5
value: 2.786
- type: map_at_10
value: 3.078
- type: map_at_20
value: 3.2099999999999995
- type: map_at_100
value: 3.42
- type: map_at_1000
value: 3.497
- type: recall_at_1
value: 1.989
- type: recall_at_3
value: 3.006
- type: recall_at_5
value: 4.394
- type: recall_at_10
value: 6.614000000000001
- type: recall_at_20
value: 8.511000000000001
- type: recall_at_100
value: 18.378
- type: recall_at_1000
value: 39.300000000000004
- type: precision_at_1
value: 2.128
- type: precision_at_3
value: 1.079
- type: precision_at_5
value: 0.962
- type: precision_at_10
value: 0.712
- type: precision_at_20
value: 0.47200000000000003
- type: precision_at_100
value: 0.20500000000000002
- type: precision_at_1000
value: 0.044000000000000004
- type: mrr_at_1
value: 2.1277
- type: mrr_at_3
value: 2.621
- type: mrr_at_5
value: 2.9726
- type: mrr_at_10
value: 3.2579
- type: mrr_at_20
value: 3.4111000000000002
- type: mrr_at_100
value: 3.6346999999999996
- type: mrr_at_1000
value: 3.7098
- type: nauc_ndcg_at_1_max
value: 9.8338
- type: nauc_ndcg_at_1_std
value: -12.548
- type: nauc_ndcg_at_1_diff1
value: 23.988100000000003
- type: nauc_ndcg_at_3_max
value: 14.5487
- type: nauc_ndcg_at_3_std
value: -14.249400000000001
- type: nauc_ndcg_at_3_diff1
value: 24.1887
- type: nauc_ndcg_at_5_max
value: 15.2084
- type: nauc_ndcg_at_5_std
value: -12.0395
- type: nauc_ndcg_at_5_diff1
value: 21.9387
- type: nauc_ndcg_at_10_max
value: 16.49
- type: nauc_ndcg_at_10_std
value: -9.2455
- type: nauc_ndcg_at_10_diff1
value: 19.6085
- type: nauc_ndcg_at_20_max
value: 16.7376
- type: nauc_ndcg_at_20_std
value: -7.4205
- type: nauc_ndcg_at_20_diff1
value: 17.7278
- type: nauc_ndcg_at_100_max
value: 12.4233
- type: nauc_ndcg_at_100_std
value: -5.614800000000001
- type: nauc_ndcg_at_100_diff1
value: 14.599799999999998
- type: nauc_ndcg_at_1000_max
value: 14.0367
- type: nauc_ndcg_at_1000_std
value: -4.0573
- type: nauc_ndcg_at_1000_diff1
value: 15.4415
- type: nauc_map_at_1_max
value: 12.962499999999999
- type: nauc_map_at_1_std
value: -11.679599999999999
- type: nauc_map_at_1_diff1
value: 24.3343
- type: nauc_map_at_3_max
value: 14.8937
- type: nauc_map_at_3_std
value: -13.460700000000001
- type: nauc_map_at_3_diff1
value: 24.3587
- type: nauc_map_at_5_max
value: 15.174299999999999
- type: nauc_map_at_5_std
value: -12.3433
- type: nauc_map_at_5_diff1
value: 22.753899999999998
- type: nauc_map_at_10_max
value: 15.7631
- type: nauc_map_at_10_std
value: -10.7924
- type: nauc_map_at_10_diff1
value: 21.3339
- type: nauc_map_at_20_max
value: 15.8264
- type: nauc_map_at_20_std
value: -10.1158
- type: nauc_map_at_20_diff1
value: 20.6053
- type: nauc_map_at_100_max
value: 14.8213
- type: nauc_map_at_100_std
value: -9.7321
- type: nauc_map_at_100_diff1
value: 19.7135
- type: nauc_map_at_1000_max
value: 14.8924
- type: nauc_map_at_1000_std
value: -9.5351
- type: nauc_map_at_1000_diff1
value: 19.6631
- type: nauc_recall_at_1_max
value: 12.962499999999999
- type: nauc_recall_at_1_std
value: -11.679599999999999
- type: nauc_recall_at_1_diff1
value: 24.3343
- type: nauc_recall_at_3_max
value: 16.7586
- type: nauc_recall_at_3_std
value: -15.3483
- type: nauc_recall_at_3_diff1
value: 25.061899999999998
- type: nauc_recall_at_5_max
value: 17.8571
- type: nauc_recall_at_5_std
value: -11.274099999999999
- type: nauc_recall_at_5_diff1
value: 21.6014
- type: nauc_recall_at_10_max
value: 19.5196
- type: nauc_recall_at_10_std
value: -6.507899999999999
- type: nauc_recall_at_10_diff1
value: 17.893
- type: nauc_recall_at_20_max
value: 19.6178
- type: nauc_recall_at_20_std
value: -3.0103999999999997
- type: nauc_recall_at_20_diff1
value: 14.6408
- type: nauc_recall_at_100_max
value: 10.41
- type: nauc_recall_at_100_std
value: -0.7312
- type: nauc_recall_at_100_diff1
value: 10.3312
- type: nauc_recall_at_1000_max
value: 15.058
- type: nauc_recall_at_1000_std
value: 1.5328
- type: nauc_recall_at_1000_diff1
value: 13.9017
- type: nauc_precision_at_1_max
value: 9.8338
- type: nauc_precision_at_1_std
value: -12.548
- type: nauc_precision_at_1_diff1
value: 23.988100000000003
- type: nauc_precision_at_3_max
value: 12.634699999999999
- type: nauc_precision_at_3_std
value: -16.3304
- type: nauc_precision_at_3_diff1
value: 22.9192
- type: nauc_precision_at_5_max
value: 12.7579
- type: nauc_precision_at_5_std
value: -11.520199999999999
- type: nauc_precision_at_5_diff1
value: 17.8422
- type: nauc_precision_at_10_max
value: 15.9994
- type: nauc_precision_at_10_std
value: -6.447700000000001
- type: nauc_precision_at_10_diff1
value: 15.634799999999998
- type: nauc_precision_at_20_max
value: 16.1337
- type: nauc_precision_at_20_std
value: -3.8893999999999997
- type: nauc_precision_at_20_diff1
value: 11.8299
- type: nauc_precision_at_100_max
value: 7.0385
- type: nauc_precision_at_100_std
value: -2.4169
- type: nauc_precision_at_100_diff1
value: 7.9619
- type: nauc_precision_at_1000_max
value: 11.1822
- type: nauc_precision_at_1000_std
value: -0.7087
- type: nauc_precision_at_1000_diff1
value: 11.1584
- type: nauc_mrr_at_1_max
value: 9.8338
- type: nauc_mrr_at_1_std
value: -12.548
- type: nauc_mrr_at_1_diff1
value: 23.988100000000003
- type: nauc_mrr_at_3_max
value: 11.2985
- type: nauc_mrr_at_3_std
value: -14.4349
- type: nauc_mrr_at_3_diff1
value: 23.0904
- type: nauc_mrr_at_5_max
value: 11.9144
- type: nauc_mrr_at_5_std
value: -12.544
- type: nauc_mrr_at_5_diff1
value: 21.580099999999998
- type: nauc_mrr_at_10_max
value: 12.802299999999999
- type: nauc_mrr_at_10_std
value: -11.1495
- type: nauc_mrr_at_10_diff1
value: 20.1189
- type: nauc_mrr_at_20_max
value: 13.0409
- type: nauc_mrr_at_20_std
value: -10.516399999999999
- type: nauc_mrr_at_20_diff1
value: 19.3462
- type: nauc_mrr_at_100_max
value: 12.0976
- type: nauc_mrr_at_100_std
value: -10.1146
- type: nauc_mrr_at_100_diff1
value: 18.3944
- type: nauc_mrr_at_1000_max
value: 12.155100000000001
- type: nauc_mrr_at_1000_std
value: -9.9877
- type: nauc_mrr_at_1000_diff1
value: 18.390500000000003
- type: main_score
value: 3.9469999999999996
- task:
type: Retrieval
dataset:
name: MTEB SadeemQuestionRetrieval (default)
type: sadeem-ai/sadeem-ar-eval-retrieval-questions
config: default
split: test
revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9
metrics:
- type: ndcg_at_1
value: 19.435
- type: ndcg_at_3
value: 42.789
- type: ndcg_at_5
value: 44.798
- type: ndcg_at_10
value: 46.705999999999996
- type: ndcg_at_20
value: 48.193000000000005
- type: ndcg_at_100
value: 49.882
- type: ndcg_at_1000
value: 50.924
- type: map_at_1
value: 19.435
- type: map_at_3
value: 36.596000000000004
- type: map_at_5
value: 37.721
- type: map_at_10
value: 38.521
- type: map_at_20
value: 38.934999999999995
- type: map_at_100
value: 39.169
- type: map_at_1000
value: 39.205
- type: recall_at_1
value: 19.435
- type: recall_at_3
value: 60.89
- type: recall_at_5
value: 65.725
- type: recall_at_10
value: 71.565
- type: recall_at_20
value: 77.405
- type: recall_at_100
value: 86.50099999999999
- type: recall_at_1000
value: 94.926
- type: precision_at_1
value: 19.435
- type: precision_at_3
value: 20.297
- type: precision_at_5
value: 13.145000000000001
- type: precision_at_10
value: 7.156999999999999
- type: precision_at_20
value: 3.8699999999999997
- type: precision_at_100
value: 0.865
- type: precision_at_1000
value: 0.095
- type: mrr_at_1
value: 17.8076
- type: mrr_at_3
value: 35.4875
- type: mrr_at_5
value: 36.78
- type: mrr_at_10
value: 37.5405
- type: mrr_at_20
value: 37.966
- type: mrr_at_100
value: 38.1923
- type: mrr_at_1000
value: 38.2282
- type: nauc_ndcg_at_1_max
value: 33.4563
- type: nauc_ndcg_at_1_std
value: 14.063300000000002
- type: nauc_ndcg_at_1_diff1
value: -29.665999999999997
- type: nauc_ndcg_at_3_max
value: 55.5122
- type: nauc_ndcg_at_3_std
value: 23.3885
- type: nauc_ndcg_at_3_diff1
value: -60.501099999999994
- type: nauc_ndcg_at_5_max
value: 54.832499999999996
- type: nauc_ndcg_at_5_std
value: 23.6066
- type: nauc_ndcg_at_5_diff1
value: -57.5511
- type: nauc_ndcg_at_10_max
value: 54.089600000000004
- type: nauc_ndcg_at_10_std
value: 23.9497
- type: nauc_ndcg_at_10_diff1
value: -55.457699999999996
- type: nauc_ndcg_at_20_max
value: 53.3345
- type: nauc_ndcg_at_20_std
value: 24.313399999999998
- type: nauc_ndcg_at_20_diff1
value: -54.1937
- type: nauc_ndcg_at_100_max
value: 52.2829
- type: nauc_ndcg_at_100_std
value: 24.3924
- type: nauc_ndcg_at_100_diff1
value: -52.9938
- type: nauc_ndcg_at_1000_max
value: 51.5458
- type: nauc_ndcg_at_1000_std
value: 23.4862
- type: nauc_ndcg_at_1000_diff1
value: -51.9041
- type: nauc_map_at_1_max
value: 33.4563
- type: nauc_map_at_1_std
value: 14.063300000000002
- type: nauc_map_at_1_diff1
value: -29.665999999999997
- type: nauc_map_at_3_max
value: 49.4643
- type: nauc_map_at_3_std
value: 20.686
- type: nauc_map_at_3_diff1
value: -51.4965
- type: nauc_map_at_5_max
value: 48.976
- type: nauc_map_at_5_std
value: 20.7495
- type: nauc_map_at_5_diff1
value: -49.645
- type: nauc_map_at_10_max
value: 48.5698
- type: nauc_map_at_10_std
value: 20.8694
- type: nauc_map_at_10_diff1
value: -48.673100000000005
- type: nauc_map_at_20_max
value: 48.3171
- type: nauc_map_at_20_std
value: 20.951900000000002
- type: nauc_map_at_20_diff1
value: -48.2722
- type: nauc_map_at_100_max
value: 48.1488
- type: nauc_map_at_100_std
value: 20.9507
- type: nauc_map_at_100_diff1
value: -48.0933
- type: nauc_map_at_1000_max
value: 48.1232
- type: nauc_map_at_1000_std
value: 20.9226
- type: nauc_map_at_1000_diff1
value: -48.0486
- type: nauc_recall_at_1_max
value: 33.4563
- type: nauc_recall_at_1_std
value: 14.063300000000002
- type: nauc_recall_at_1_diff1
value: -29.665999999999997
- type: nauc_recall_at_3_max
value: 73.1441
- type: nauc_recall_at_3_std
value: 31.3154
- type: nauc_recall_at_3_diff1
value: -86.93469999999999
- type: nauc_recall_at_5_max
value: 73.0428
- type: nauc_recall_at_5_std
value: 32.6181
- type: nauc_recall_at_5_diff1
value: -82.15289999999999
- type: nauc_recall_at_10_max
value: 73.0875
- type: nauc_recall_at_10_std
value: 34.933
- type: nauc_recall_at_10_diff1
value: -78.28
- type: nauc_recall_at_20_max
value: 73.03150000000001
- type: nauc_recall_at_20_std
value: 38.8894
- type: nauc_recall_at_20_diff1
value: -76.3884
- type: nauc_recall_at_100_max
value: 73.2723
- type: nauc_recall_at_100_std
value: 47.7568
- type: nauc_recall_at_100_diff1
value: -75.98169999999999
- type: nauc_recall_at_1000_max
value: 76.5266
- type: nauc_recall_at_1000_std
value: 47.3315
- type: nauc_recall_at_1000_diff1
value: -70.95139999999999
- type: nauc_precision_at_1_max
value: 33.4563
- type: nauc_precision_at_1_std
value: 14.063300000000002
- type: nauc_precision_at_1_diff1
value: -29.665999999999997
- type: nauc_precision_at_3_max
value: 73.1441
- type: nauc_precision_at_3_std
value: 31.3154
- type: nauc_precision_at_3_diff1
value: -86.93469999999999
- type: nauc_precision_at_5_max
value: 73.0428
- type: nauc_precision_at_5_std
value: 32.6181
- type: nauc_precision_at_5_diff1
value: -82.15289999999999
- type: nauc_precision_at_10_max
value: 73.0875
- type: nauc_precision_at_10_std
value: 34.933
- type: nauc_precision_at_10_diff1
value: -78.28
- type: nauc_precision_at_20_max
value: 73.03150000000001
- type: nauc_precision_at_20_std
value: 38.8894
- type: nauc_precision_at_20_diff1
value: -76.3884
- type: nauc_precision_at_100_max
value: 73.2723
- type: nauc_precision_at_100_std
value: 47.7568
- type: nauc_precision_at_100_diff1
value: -75.98169999999999
- type: nauc_precision_at_1000_max
value: 76.5266
- type: nauc_precision_at_1000_std
value: 47.3315
- type: nauc_precision_at_1000_diff1
value: -70.95139999999999
- type: nauc_mrr_at_1_max
value: 28.7221
- type: nauc_mrr_at_1_std
value: 11.3037
- type: nauc_mrr_at_1_diff1
value: -36.5891
- type: nauc_mrr_at_3_max
value: 47.3382
- type: nauc_mrr_at_3_std
value: 19.6286
- type: nauc_mrr_at_3_diff1
value: -57.08689999999999
- type: nauc_mrr_at_5_max
value: 46.6486
- type: nauc_mrr_at_5_std
value: 19.6178
- type: nauc_mrr_at_5_diff1
value: -55.2681
- type: nauc_mrr_at_10_max
value: 46.0209
- type: nauc_mrr_at_10_std
value: 19.5032
- type: nauc_mrr_at_10_diff1
value: -54.3868
- type: nauc_mrr_at_20_max
value: 45.729
- type: nauc_mrr_at_20_std
value: 19.4986
- type: nauc_mrr_at_20_diff1
value: -53.967699999999994
- type: nauc_mrr_at_100_max
value: 45.5478
- type: nauc_mrr_at_100_std
value: 19.484299999999998
- type: nauc_mrr_at_100_diff1
value: -53.8288
- type: nauc_mrr_at_1000_max
value: 45.5182
- type: nauc_mrr_at_1000_std
value: 19.453400000000002
- type: nauc_mrr_at_1000_diff1
value: -53.7893
- type: main_score
value: 46.705999999999996
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (ara-ara)
type: jinaai/xpqa
config: ara-ara
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_3
value: 20.547
- type: ndcg_at_5
value: 21.232
- type: ndcg_at_10
value: 23.518
- type: ndcg_at_20
value: 25.659
- type: ndcg_at_100
value: 29.643000000000004
- type: ndcg_at_1000
value: 34.81
- type: map_at_1
value: 10.544
- type: map_at_3
value: 16.2
- type: map_at_5
value: 17.743000000000002
- type: map_at_10
value: 18.951
- type: map_at_20
value: 19.704
- type: map_at_100
value: 20.355
- type: map_at_1000
value: 20.569000000000003
- type: recall_at_1
value: 10.544
- type: recall_at_3
value: 19.32
- type: recall_at_5
value: 23.355999999999998
- type: recall_at_10
value: 28.951
- type: recall_at_20
value: 35.878
- type: recall_at_100
value: 54.496
- type: recall_at_1000
value: 90.958
- type: precision_at_1
value: 20.8
- type: precision_at_3
value: 14.133000000000001
- type: precision_at_5
value: 10.453
- type: precision_at_10
value: 6.52
- type: precision_at_20
value: 4.0
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.186
- type: mrr_at_1
value: 20.8
- type: mrr_at_3
value: 24.8444
- type: mrr_at_5
value: 25.7911
- type: mrr_at_10
value: 26.5573
- type: mrr_at_20
value: 27.030500000000004
- type: mrr_at_100
value: 27.4134
- type: mrr_at_1000
value: 27.528799999999997
- type: nauc_ndcg_at_1_max
value: 31.7051
- type: nauc_ndcg_at_1_std
value: 1.2411999999999999
- type: nauc_ndcg_at_1_diff1
value: 33.0747
- type: nauc_ndcg_at_3_max
value: 30.142400000000002
- type: nauc_ndcg_at_3_std
value: -0.9313999999999999
- type: nauc_ndcg_at_3_diff1
value: 26.7065
- type: nauc_ndcg_at_5_max
value: 29.7749
- type: nauc_ndcg_at_5_std
value: 0.0249
- type: nauc_ndcg_at_5_diff1
value: 26.8829
- type: nauc_ndcg_at_10_max
value: 30.777500000000003
- type: nauc_ndcg_at_10_std
value: 0.7138
- type: nauc_ndcg_at_10_diff1
value: 26.270599999999998
- type: nauc_ndcg_at_20_max
value: 30.8149
- type: nauc_ndcg_at_20_std
value: 0.7107
- type: nauc_ndcg_at_20_diff1
value: 26.0781
- type: nauc_ndcg_at_100_max
value: 30.1661
- type: nauc_ndcg_at_100_std
value: 1.4445
- type: nauc_ndcg_at_100_diff1
value: 25.7807
- type: nauc_ndcg_at_1000_max
value: 31.0257
- type: nauc_ndcg_at_1000_std
value: 1.8606999999999998
- type: nauc_ndcg_at_1000_diff1
value: 27.2222
- type: nauc_map_at_1_max
value: 17.7301
- type: nauc_map_at_1_std
value: -3.6554999999999995
- type: nauc_map_at_1_diff1
value: 31.9805
- type: nauc_map_at_3_max
value: 27.411400000000004
- type: nauc_map_at_3_std
value: -2.1001
- type: nauc_map_at_3_diff1
value: 26.7978
- type: nauc_map_at_5_max
value: 28.4826
- type: nauc_map_at_5_std
value: -1.5623
- type: nauc_map_at_5_diff1
value: 26.6386
- type: nauc_map_at_10_max
value: 29.229300000000002
- type: nauc_map_at_10_std
value: -1.2293
- type: nauc_map_at_10_diff1
value: 26.287
- type: nauc_map_at_20_max
value: 29.4007
- type: nauc_map_at_20_std
value: -1.0069
- type: nauc_map_at_20_diff1
value: 26.114900000000002
- type: nauc_map_at_100_max
value: 29.5016
- type: nauc_map_at_100_std
value: -0.8401000000000001
- type: nauc_map_at_100_diff1
value: 26.247300000000003
- type: nauc_map_at_1000_max
value: 29.5489
- type: nauc_map_at_1000_std
value: -0.762
- type: nauc_map_at_1000_diff1
value: 26.3015
- type: nauc_recall_at_1_max
value: 17.7301
- type: nauc_recall_at_1_std
value: -3.6554999999999995
- type: nauc_recall_at_1_diff1
value: 31.9805
- type: nauc_recall_at_3_max
value: 26.789099999999998
- type: nauc_recall_at_3_std
value: -1.087
- type: nauc_recall_at_3_diff1
value: 22.7132
- type: nauc_recall_at_5_max
value: 27.6821
- type: nauc_recall_at_5_std
value: 1.043
- type: nauc_recall_at_5_diff1
value: 23.6854
- type: nauc_recall_at_10_max
value: 28.6304
- type: nauc_recall_at_10_std
value: 1.8037
- type: nauc_recall_at_10_diff1
value: 21.7246
- type: nauc_recall_at_20_max
value: 27.939199999999996
- type: nauc_recall_at_20_std
value: 0.9745
- type: nauc_recall_at_20_diff1
value: 20.9084
- type: nauc_recall_at_100_max
value: 23.5267
- type: nauc_recall_at_100_std
value: 3.2817
- type: nauc_recall_at_100_diff1
value: 17.907
- type: nauc_recall_at_1000_max
value: 35.5056
- type: nauc_recall_at_1000_std
value: 8.5216
- type: nauc_recall_at_1000_diff1
value: 36.6571
- type: nauc_precision_at_1_max
value: 31.7051
- type: nauc_precision_at_1_std
value: 1.2411999999999999
- type: nauc_precision_at_1_diff1
value: 33.0747
- type: nauc_precision_at_3_max
value: 38.2081
- type: nauc_precision_at_3_std
value: 1.3497000000000001
- type: nauc_precision_at_3_diff1
value: 22.3155
- type: nauc_precision_at_5_max
value: 38.367200000000004
- type: nauc_precision_at_5_std
value: 2.781
- type: nauc_precision_at_5_diff1
value: 21.5532
- type: nauc_precision_at_10_max
value: 37.7538
- type: nauc_precision_at_10_std
value: 4.7659
- type: nauc_precision_at_10_diff1
value: 19.6003
- type: nauc_precision_at_20_max
value: 35.1427
- type: nauc_precision_at_20_std
value: 5.5358
- type: nauc_precision_at_20_diff1
value: 17.808
- type: nauc_precision_at_100_max
value: 29.7634
- type: nauc_precision_at_100_std
value: 7.9015
- type: nauc_precision_at_100_diff1
value: 14.9111
- type: nauc_precision_at_1000_max
value: 21.906100000000002
- type: nauc_precision_at_1000_std
value: 8.9498
- type: nauc_precision_at_1000_diff1
value: 12.1544
- type: nauc_mrr_at_1_max
value: 31.7051
- type: nauc_mrr_at_1_std
value: 1.2411999999999999
- type: nauc_mrr_at_1_diff1
value: 33.0747
- type: nauc_mrr_at_3_max
value: 31.278200000000002
- type: nauc_mrr_at_3_std
value: 1.3494000000000002
- type: nauc_mrr_at_3_diff1
value: 29.066599999999998
- type: nauc_mrr_at_5_max
value: 31.5683
- type: nauc_mrr_at_5_std
value: 1.9106
- type: nauc_mrr_at_5_diff1
value: 29.5798
- type: nauc_mrr_at_10_max
value: 31.744600000000002
- type: nauc_mrr_at_10_std
value: 2.4455999999999998
- type: nauc_mrr_at_10_diff1
value: 29.1437
- type: nauc_mrr_at_20_max
value: 31.5781
- type: nauc_mrr_at_20_std
value: 2.2138
- type: nauc_mrr_at_20_diff1
value: 29.279899999999998
- type: nauc_mrr_at_100_max
value: 31.435000000000002
- type: nauc_mrr_at_100_std
value: 2.2043
- type: nauc_mrr_at_100_diff1
value: 29.216199999999997
- type: nauc_mrr_at_1000_max
value: 31.465799999999998
- type: nauc_mrr_at_1000_std
value: 2.2215
- type: nauc_mrr_at_1000_diff1
value: 29.2512
- type: main_score
value: 23.518
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (eng-ara)
type: jinaai/xpqa
config: eng-ara
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 1.2
- type: ndcg_at_3
value: 1.1860000000000002
- type: ndcg_at_5
value: 1.3050000000000002
- type: ndcg_at_10
value: 1.6969999999999998
- type: ndcg_at_20
value: 2.044
- type: ndcg_at_100
value: 3.5069999999999997
- type: ndcg_at_1000
value: 11.62
- type: map_at_1
value: 0.656
- type: map_at_3
value: 0.903
- type: map_at_5
value: 1.051
- type: map_at_10
value: 1.189
- type: map_at_20
value: 1.2850000000000001
- type: map_at_100
value: 1.452
- type: map_at_1000
value: 1.6729999999999998
- type: recall_at_1
value: 0.656
- type: recall_at_3
value: 1.0290000000000001
- type: recall_at_5
value: 1.46
- type: recall_at_10
value: 2.478
- type: recall_at_20
value: 3.6639999999999997
- type: recall_at_100
value: 10.453
- type: recall_at_1000
value: 68.58
- type: precision_at_1
value: 1.2
- type: precision_at_3
value: 0.844
- type: precision_at_5
value: 0.6930000000000001
- type: precision_at_10
value: 0.573
- type: precision_at_20
value: 0.393
- type: precision_at_100
value: 0.22399999999999998
- type: precision_at_1000
value: 0.147
- type: mrr_at_1
value: 1.2
- type: mrr_at_3
value: 1.5778
- type: mrr_at_5
value: 1.6978
- type: mrr_at_10
value: 1.9314999999999998
- type: mrr_at_20
value: 2.0536
- type: mrr_at_100
value: 2.2948
- type: mrr_at_1000
value: 2.4878
- type: nauc_ndcg_at_1_max
value: 74.081
- type: nauc_ndcg_at_1_std
value: 5.8313
- type: nauc_ndcg_at_1_diff1
value: 62.427299999999995
- type: nauc_ndcg_at_3_max
value: 65.3629
- type: nauc_ndcg_at_3_std
value: 6.7885
- type: nauc_ndcg_at_3_diff1
value: 54.3825
- type: nauc_ndcg_at_5_max
value: 63.497099999999996
- type: nauc_ndcg_at_5_std
value: 7.2825
- type: nauc_ndcg_at_5_diff1
value: 49.7187
- type: nauc_ndcg_at_10_max
value: 52.3784
- type: nauc_ndcg_at_10_std
value: 3.5996
- type: nauc_ndcg_at_10_diff1
value: 38.3057
- type: nauc_ndcg_at_20_max
value: 47.599799999999995
- type: nauc_ndcg_at_20_std
value: 2.8116
- type: nauc_ndcg_at_20_diff1
value: 35.433
- type: nauc_ndcg_at_100_max
value: 33.6852
- type: nauc_ndcg_at_100_std
value: 4.1317
- type: nauc_ndcg_at_100_diff1
value: 21.5679
- type: nauc_ndcg_at_1000_max
value: 24.516
- type: nauc_ndcg_at_1000_std
value: 5.9024
- type: nauc_ndcg_at_1000_diff1
value: 15.1338
- type: nauc_map_at_1_max
value: 85.331
- type: nauc_map_at_1_std
value: 18.3235
- type: nauc_map_at_1_diff1
value: 80.762
- type: nauc_map_at_3_max
value: 75.1557
- type: nauc_map_at_3_std
value: 11.3855
- type: nauc_map_at_3_diff1
value: 69.277
- type: nauc_map_at_5_max
value: 70.8756
- type: nauc_map_at_5_std
value: 8.223700000000001
- type: nauc_map_at_5_diff1
value: 61.6509
- type: nauc_map_at_10_max
value: 64.0045
- type: nauc_map_at_10_std
value: 6.1125
- type: nauc_map_at_10_diff1
value: 54.5543
- type: nauc_map_at_20_max
value: 61.04619999999999
- type: nauc_map_at_20_std
value: 5.5213
- type: nauc_map_at_20_diff1
value: 52.05309999999999
- type: nauc_map_at_100_max
value: 55.69
- type: nauc_map_at_100_std
value: 5.2997000000000005
- type: nauc_map_at_100_diff1
value: 46.5183
- type: nauc_map_at_1000_max
value: 53.2733
- type: nauc_map_at_1000_std
value: 5.3787
- type: nauc_map_at_1000_diff1
value: 44.2553
- type: nauc_recall_at_1_max
value: 85.331
- type: nauc_recall_at_1_std
value: 18.3235
- type: nauc_recall_at_1_diff1
value: 80.762
- type: nauc_recall_at_3_max
value: 68.1551
- type: nauc_recall_at_3_std
value: 12.2398
- type: nauc_recall_at_3_diff1
value: 60.7436
- type: nauc_recall_at_5_max
value: 62.2638
- type: nauc_recall_at_5_std
value: 8.578
- type: nauc_recall_at_5_diff1
value: 42.3461
- type: nauc_recall_at_10_max
value: 42.8151
- type: nauc_recall_at_10_std
value: 1.034
- type: nauc_recall_at_10_diff1
value: 23.8109
- type: nauc_recall_at_20_max
value: 36.9734
- type: nauc_recall_at_20_std
value: 0.9624
- type: nauc_recall_at_20_diff1
value: 22.0584
- type: nauc_recall_at_100_max
value: 21.0573
- type: nauc_recall_at_100_std
value: 3.7708
- type: nauc_recall_at_100_diff1
value: 7.7184
- type: nauc_recall_at_1000_max
value: 8.8652
- type: nauc_recall_at_1000_std
value: 5.3474
- type: nauc_recall_at_1000_diff1
value: 7.3409
- type: nauc_precision_at_1_max
value: 74.081
- type: nauc_precision_at_1_std
value: 5.8313
- type: nauc_precision_at_1_diff1
value: 62.427299999999995
- type: nauc_precision_at_3_max
value: 51.821
- type: nauc_precision_at_3_std
value: -1.3345
- type: nauc_precision_at_3_diff1
value: 37.6809
- type: nauc_precision_at_5_max
value: 45.9495
- type: nauc_precision_at_5_std
value: -1.6027
- type: nauc_precision_at_5_diff1
value: 30.794
- type: nauc_precision_at_10_max
value: 34.2635
- type: nauc_precision_at_10_std
value: -4.0278
- type: nauc_precision_at_10_diff1
value: 19.223000000000003
- type: nauc_precision_at_20_max
value: 30.588500000000003
- type: nauc_precision_at_20_std
value: -5.0488
- type: nauc_precision_at_20_diff1
value: 20.971999999999998
- type: nauc_precision_at_100_max
value: 18.7883
- type: nauc_precision_at_100_std
value: 3.4913
- type: nauc_precision_at_100_diff1
value: 9.4293
- type: nauc_precision_at_1000_max
value: 5.8584
- type: nauc_precision_at_1000_std
value: 6.8013
- type: nauc_precision_at_1000_diff1
value: -2.4122
- type: nauc_mrr_at_1_max
value: 74.081
- type: nauc_mrr_at_1_std
value: 5.8313
- type: nauc_mrr_at_1_diff1
value: 62.427299999999995
- type: nauc_mrr_at_3_max
value: 58.44819999999999
- type: nauc_mrr_at_3_std
value: 3.6037
- type: nauc_mrr_at_3_diff1
value: 42.664699999999996
- type: nauc_mrr_at_5_max
value: 56.606100000000005
- type: nauc_mrr_at_5_std
value: 4.3769
- type: nauc_mrr_at_5_diff1
value: 39.446799999999996
- type: nauc_mrr_at_10_max
value: 52.283699999999996
- type: nauc_mrr_at_10_std
value: 3.3348000000000004
- type: nauc_mrr_at_10_diff1
value: 35.186099999999996
- type: nauc_mrr_at_20_max
value: 50.6598
- type: nauc_mrr_at_20_std
value: 3.1269
- type: nauc_mrr_at_20_diff1
value: 34.930099999999996
- type: nauc_mrr_at_100_max
value: 46.7037
- type: nauc_mrr_at_100_std
value: 3.2654
- type: nauc_mrr_at_100_diff1
value: 31.1309
- type: nauc_mrr_at_1000_max
value: 46.1128
- type: nauc_mrr_at_1000_std
value: 3.3853
- type: nauc_mrr_at_1000_diff1
value: 30.3609
- type: main_score
value: 1.6969999999999998
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (ara-eng)
type: jinaai/xpqa
config: ara-eng
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: ndcg_at_1
value: 1.617
- type: ndcg_at_3
value: 1.8159999999999998
- type: ndcg_at_5
value: 1.9869999999999999
- type: ndcg_at_10
value: 2.394
- type: ndcg_at_20
value: 2.724
- type: ndcg_at_100
value: 4.2909999999999995
- type: ndcg_at_1000
value: 12.857
- type: map_at_1
value: 0.903
- type: map_at_3
value: 1.421
- type: map_at_5
value: 1.5610000000000002
- type: map_at_10
value: 1.7420000000000002
- type: map_at_20
value: 1.828
- type: map_at_100
value: 2.016
- type: map_at_1000
value: 2.259
- type: recall_at_1
value: 0.903
- type: recall_at_3
value: 1.923
- type: recall_at_5
value: 2.4330000000000003
- type: recall_at_10
value: 3.4819999999999998
- type: recall_at_20
value: 4.5440000000000005
- type: recall_at_100
value: 11.846
- type: recall_at_1000
value: 74.371
- type: precision_at_1
value: 1.617
- type: precision_at_3
value: 1.168
- type: precision_at_5
value: 0.889
- type: precision_at_10
value: 0.647
- type: precision_at_20
value: 0.438
- type: precision_at_100
value: 0.244
- type: precision_at_1000
value: 0.146
- type: mrr_at_1
value: 1.6173
- type: mrr_at_3
value: 2.2686
- type: mrr_at_5
value: 2.3899
- type: mrr_at_10
value: 2.5806
- type: mrr_at_20
value: 2.7121
- type: mrr_at_100
value: 2.9324
- type: mrr_at_1000
value: 3.1441
- type: nauc_ndcg_at_1_max
value: 41.4733
- type: nauc_ndcg_at_1_std
value: 34.5204
- type: nauc_ndcg_at_1_diff1
value: 38.8662
- type: nauc_ndcg_at_3_max
value: 41.3135
- type: nauc_ndcg_at_3_std
value: 40.0385
- type: nauc_ndcg_at_3_diff1
value: 36.750899999999994
- type: nauc_ndcg_at_5_max
value: 42.9281
- type: nauc_ndcg_at_5_std
value: 39.9347
- type: nauc_ndcg_at_5_diff1
value: 35.3783
- type: nauc_ndcg_at_10_max
value: 42.743900000000004
- type: nauc_ndcg_at_10_std
value: 41.6663
- type: nauc_ndcg_at_10_diff1
value: 31.0463
- type: nauc_ndcg_at_20_max
value: 43.5237
- type: nauc_ndcg_at_20_std
value: 39.6809
- type: nauc_ndcg_at_20_diff1
value: 32.651
- type: nauc_ndcg_at_100_max
value: 33.3655
- type: nauc_ndcg_at_100_std
value: 32.0311
- type: nauc_ndcg_at_100_diff1
value: 28.723399999999998
- type: nauc_ndcg_at_1000_max
value: 31.1311
- type: nauc_ndcg_at_1000_std
value: 28.838900000000002
- type: nauc_ndcg_at_1000_diff1
value: 26.2104
- type: nauc_map_at_1_max
value: 34.202
- type: nauc_map_at_1_std
value: 33.9772
- type: nauc_map_at_1_diff1
value: 44.6104
- type: nauc_map_at_3_max
value: 39.6785
- type: nauc_map_at_3_std
value: 39.4152
- type: nauc_map_at_3_diff1
value: 37.6022
- type: nauc_map_at_5_max
value: 41.2645
- type: nauc_map_at_5_std
value: 38.6109
- type: nauc_map_at_5_diff1
value: 37.3159
- type: nauc_map_at_10_max
value: 41.9172
- type: nauc_map_at_10_std
value: 40.3848
- type: nauc_map_at_10_diff1
value: 35.2489
- type: nauc_map_at_20_max
value: 42.0995
- type: nauc_map_at_20_std
value: 39.6004
- type: nauc_map_at_20_diff1
value: 35.4418
- type: nauc_map_at_100_max
value: 39.7447
- type: nauc_map_at_100_std
value: 37.819599999999994
- type: nauc_map_at_100_diff1
value: 34.1062
- type: nauc_map_at_1000_max
value: 39.2917
- type: nauc_map_at_1000_std
value: 37.1777
- type: nauc_map_at_1000_diff1
value: 33.6102
- type: nauc_recall_at_1_max
value: 34.202
- type: nauc_recall_at_1_std
value: 33.9772
- type: nauc_recall_at_1_diff1
value: 44.6104
- type: nauc_recall_at_3_max
value: 39.048
- type: nauc_recall_at_3_std
value: 39.7222
- type: nauc_recall_at_3_diff1
value: 33.0168
- type: nauc_recall_at_5_max
value: 42.954100000000004
- type: nauc_recall_at_5_std
value: 39.4149
- type: nauc_recall_at_5_diff1
value: 31.6088
- type: nauc_recall_at_10_max
value: 41.2203
- type: nauc_recall_at_10_std
value: 41.7063
- type: nauc_recall_at_10_diff1
value: 24.0288
- type: nauc_recall_at_20_max
value: 44.0757
- type: nauc_recall_at_20_std
value: 38.6803
- type: nauc_recall_at_20_diff1
value: 29.157899999999998
- type: nauc_recall_at_100_max
value: 24.6526
- type: nauc_recall_at_100_std
value: 24.0066
- type: nauc_recall_at_100_diff1
value: 23.8347
- type: nauc_recall_at_1000_max
value: 22.596
- type: nauc_recall_at_1000_std
value: 21.290799999999997
- type: nauc_recall_at_1000_diff1
value: 21.012700000000002
- type: nauc_precision_at_1_max
value: 41.4733
- type: nauc_precision_at_1_std
value: 34.5204
- type: nauc_precision_at_1_diff1
value: 38.8662
- type: nauc_precision_at_3_max
value: 48.1229
- type: nauc_precision_at_3_std
value: 47.712500000000006
- type: nauc_precision_at_3_diff1
value: 35.7151
- type: nauc_precision_at_5_max
value: 50.8463
- type: nauc_precision_at_5_std
value: 46.9867
- type: nauc_precision_at_5_diff1
value: 33.0426
- type: nauc_precision_at_10_max
value: 50.7306
- type: nauc_precision_at_10_std
value: 49.5174
- type: nauc_precision_at_10_diff1
value: 28.2889
- type: nauc_precision_at_20_max
value: 49.6035
- type: nauc_precision_at_20_std
value: 42.9794
- type: nauc_precision_at_20_diff1
value: 32.3811
- type: nauc_precision_at_100_max
value: 30.7262
- type: nauc_precision_at_100_std
value: 29.2314
- type: nauc_precision_at_100_diff1
value: 25.7678
- type: nauc_precision_at_1000_max
value: 13.3632
- type: nauc_precision_at_1000_std
value: 11.4093
- type: nauc_precision_at_1000_diff1
value: 11.015
- type: nauc_mrr_at_1_max
value: 41.4733
- type: nauc_mrr_at_1_std
value: 34.5204
- type: nauc_mrr_at_1_diff1
value: 38.8662
- type: nauc_mrr_at_3_max
value: 43.217299999999994
- type: nauc_mrr_at_3_std
value: 39.5736
- type: nauc_mrr_at_3_diff1
value: 38.129999999999995
- type: nauc_mrr_at_5_max
value: 44.241
- type: nauc_mrr_at_5_std
value: 40.646100000000004
- type: nauc_mrr_at_5_diff1
value: 36.2331
- type: nauc_mrr_at_10_max
value: 43.6115
- type: nauc_mrr_at_10_std
value: 40.7157
- type: nauc_mrr_at_10_diff1
value: 33.1217
- type: nauc_mrr_at_20_max
value: 43.3382
- type: nauc_mrr_at_20_std
value: 39.4582
- type: nauc_mrr_at_20_diff1
value: 33.6253
- type: nauc_mrr_at_100_max
value: 40.780100000000004
- type: nauc_mrr_at_100_std
value: 37.9242
- type: nauc_mrr_at_100_diff1
value: 32.8418
- type: nauc_mrr_at_1000_max
value: 40.5963
- type: nauc_mrr_at_1000_std
value: 37.5467
- type: nauc_mrr_at_1000_diff1
value: 32.542
- type: main_score
value: 2.394
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 69.84925402371587
- type: cosine_spearman
value: 67.12261377163864
- type: euclidean_pearson
value: 68.77931734192
- type: euclidean_spearman
value: 67.10454107068325
- type: main_score
value: 67.12261377163864
- type: manhattan_pearson
value: 69.39988076793398
- type: manhattan_spearman
value: 67.68708446481159
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 72.71925116055804
- type: cosine_spearman
value: 68.9386835022992
- type: euclidean_pearson
value: 71.00708266525079
- type: euclidean_spearman
value: 69.07087906196487
- type: main_score
value: 68.9386835022992
- type: manhattan_pearson
value: 70.95266060047263
- type: manhattan_spearman
value: 69.11051988196195
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 71.67274455692545
- type: cosine_spearman
value: 68.71669873972587
- type: euclidean_pearson
value: 69.79037485042406
- type: euclidean_spearman
value: 68.80550150752252
- type: main_score
value: 68.71669873972587
- type: manhattan_pearson
value: 69.7571283034187
- type: manhattan_spearman
value: 68.58306466019968
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 54.172888286882504
- type: cosine_spearman
value: 56.04247097489131
- type: euclidean_pearson
value: 57.88587934777827
- type: euclidean_spearman
value: 57.6139294630564
- type: main_score
value: 56.04247097489131
- type: manhattan_pearson
value: 57.616116618991185
- type: manhattan_spearman
value: 57.23150380799801
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 59.58820914531488
- type: cosine_spearman
value: 58.80575077741524
- type: euclidean_pearson
value: 61.1884427988923
- type: euclidean_spearman
value: 60.661625936116124
- type: main_score
value: 58.80575077741524
- type: manhattan_pearson
value: 60.800157410891885
- type: manhattan_spearman
value: 60.29447727072491
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 73.45220638967554
- type: cosine_spearman
value: 73.74453589715445
- type: euclidean_pearson
value: 73.8887071337604
- type: euclidean_spearman
value: 73.51752094057372
- type: main_score
value: 73.74453589715445
- type: manhattan_pearson
value: 73.45961523235827
- type: manhattan_spearman
value: 73.07675481848841
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 66.84132105540075
- type: cosine_spearman
value: 68.24735989887876
- type: euclidean_pearson
value: 68.2712231484699
- type: euclidean_spearman
value: 68.02365271737838
- type: main_score
value: 68.24735989887876
- type: manhattan_pearson
value: 67.87379902773417
- type: manhattan_spearman
value: 67.65342499070456
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 79.2987412566616
- type: cosine_spearman
value: 79.93275889323859
- type: euclidean_pearson
value: 77.90301430319637
- type: euclidean_spearman
value: 79.12169562085792
- type: main_score
value: 79.93275889323859
- type: manhattan_pearson
value: 77.93298637610417
- type: manhattan_spearman
value: 79.38516109229111
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 46.955019830396445
- type: cosine_spearman
value: 52.44226852669887
- type: euclidean_pearson
value: 42.80891863181744
- type: euclidean_spearman
value: 53.175461247693704
- type: main_score
value: 52.44226852669887
- type: manhattan_pearson
value: 42.97005510727849
- type: manhattan_spearman
value: 53.158087426369825
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 66.99025999216197
- type: cosine_spearman
value: 67.56341643518167
- type: euclidean_pearson
value: 69.73441598964332
- type: euclidean_spearman
value: 68.72541136876826
- type: main_score
value: 67.56341643518167
- type: manhattan_pearson
value: 69.43492004000674
- type: manhattan_spearman
value: 68.39614969063062
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 30.13248188083236
- type: cosine_spearman
value: 28.78575545661001
- type: dot_pearson
value: 30.934754821379464
- type: dot_spearman
value: 29.730792596057093
- type: main_score
value: 28.78575545661001
- type: pearson
value: 30.13248188083236
- type: spearman
value: 28.78575545661001
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.66986244175229
name: Pearson Cosine
- type: spearman_cosine
value: 0.675651628513557
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6943200977280434
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6839707658313092
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6973190148612566
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6872926092972673
name: Spearman Euclidean
- type: pearson_dot
value: 0.5534197296097646
name: Pearson Dot
- type: spearman_dot
value: 0.5421965591416092
name: Spearman Dot
- type: pearson_max
value: 0.6973190148612566
name: Pearson Max
- type: spearman_max
value: 0.6872926092972673
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.6628171358537143
name: Pearson Cosine
- type: spearman_cosine
value: 0.670314701212355
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6916567677127377
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6815748132707206
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6948756461188812
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.685329042213794
name: Spearman Euclidean
- type: pearson_dot
value: 0.5229142840207227
name: Pearson Dot
- type: spearman_dot
value: 0.5113740757424073
name: Spearman Dot
- type: pearson_max
value: 0.6948756461188812
name: Pearson Max
- type: spearman_max
value: 0.685329042213794
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.6368313837029833
name: Pearson Cosine
- type: spearman_cosine
value: 0.6512526280069127
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6832129716443456
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.674638334774044
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6843664039671002
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6760040651639672
name: Spearman Euclidean
- type: pearson_dot
value: 0.4266095536126992
name: Pearson Dot
- type: spearman_dot
value: 0.4179376458107888
name: Spearman Dot
- type: pearson_max
value: 0.6843664039671002
name: Pearson Max
- type: spearman_max
value: 0.6760040651639672
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.6147896744901056
name: Pearson Cosine
- type: spearman_cosine
value: 0.6354730852658397
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6730782159165468
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6652649799789521
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.676407799774529
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6691409653459247
name: Spearman Euclidean
- type: pearson_dot
value: 0.35130869784942953
name: Pearson Dot
- type: spearman_dot
value: 0.3445374275232203
name: Spearman Dot
- type: pearson_max
value: 0.676407799774529
name: Pearson Max
- type: spearman_max
value: 0.6691409653459247
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.5789158725954748
name: Pearson Cosine
- type: spearman_cosine
value: 0.6081197115891086
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6578631744829946
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6518503436513217
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6629734628760299
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6570510967281272
name: Spearman Euclidean
- type: pearson_dot
value: 0.24034366392620327
name: Pearson Dot
- type: spearman_dot
value: 0.2331392769925126
name: Spearman Dot
- type: pearson_max
value: 0.6629734628760299
name: Pearson Max
- type: spearman_max
value: 0.6570510967281272
name: Spearman Max
---
# SentenceTransformer based on tomaarsen/mpnet-base-all-nli-triplet
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [tomaarsen/mpnet-base-all-nli-triplet](https://huggingface.co/tomaarsen/mpnet-base-all-nli-triplet) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [tomaarsen/mpnet-base-all-nli-triplet](https://huggingface.co/tomaarsen/mpnet-base-all-nli-triplet) <!-- at revision e88732e5620f3592bf6566604be9a6a5cad814ec -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base")
# Run inference
sentences = [
    'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',  # A young man with blond hair sits on the wall reading a newspaper while a woman and a young girl pass by.
    'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',  # A young male looks at a newspaper while two women pass beside him.
    'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',  # The young man is asleep while the mother leads her daughter to the park.
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
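Because the model was trained with `MatryoshkaLoss` over the dimensions 768, 512, 256, 128, and 64 (see Training Details below), its embeddings can also be truncated to any of those sizes, trading a little accuracy for smaller vectors; the `sts-test-*` results below quantify the drop. A minimal sketch using the library's `truncate_dim` argument:
```python
from sentence_transformers import SentenceTransformer

# Load the model with embeddings truncated to 256 dimensions; any of the
# trained Matryoshka dimensions (768, 512, 256, 128, 64) works here.
model = SentenceTransformer(
    "Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base",
    truncate_dim=256,
)

embeddings = model.encode([
    "شخص في الهواء الطلق، على حصان.",  # A person outdoors, on a horse.
    "شخص في مطعم، يطلب عجة.",  # A person in a restaurant, ordering an omelette.
])
print(embeddings.shape)
# (2, 256)
```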
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6699 |
| **spearman_cosine** | **0.6757** |
| pearson_manhattan | 0.6943 |
| spearman_manhattan | 0.684 |
| pearson_euclidean | 0.6973 |
| spearman_euclidean | 0.6873 |
| pearson_dot | 0.5534 |
| spearman_dot | 0.5422 |
| pearson_max | 0.6973 |
| spearman_max | 0.6873 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6628 |
| **spearman_cosine** | **0.6703** |
| pearson_manhattan | 0.6917 |
| spearman_manhattan | 0.6816 |
| pearson_euclidean | 0.6949 |
| spearman_euclidean | 0.6853 |
| pearson_dot | 0.5229 |
| spearman_dot | 0.5114 |
| pearson_max | 0.6949 |
| spearman_max | 0.6853 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6368 |
| **spearman_cosine** | **0.6513** |
| pearson_manhattan | 0.6832 |
| spearman_manhattan | 0.6746 |
| pearson_euclidean | 0.6844 |
| spearman_euclidean | 0.676 |
| pearson_dot | 0.4266 |
| spearman_dot | 0.4179 |
| pearson_max | 0.6844 |
| spearman_max | 0.676 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6148 |
| **spearman_cosine** | **0.6355** |
| pearson_manhattan | 0.6731 |
| spearman_manhattan | 0.6653 |
| pearson_euclidean | 0.6764 |
| spearman_euclidean | 0.6691 |
| pearson_dot | 0.3513 |
| spearman_dot | 0.3445 |
| pearson_max | 0.6764 |
| spearman_max | 0.6691 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.5789 |
| **spearman_cosine** | **0.6081** |
| pearson_manhattan | 0.6579 |
| spearman_manhattan | 0.6519 |
| pearson_euclidean | 0.663 |
| spearman_euclidean | 0.6571 |
| pearson_dot | 0.2403 |
| spearman_dot | 0.2331 |
| pearson_max | 0.663 |
| spearman_max | 0.6571 |
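All five tables come from the `EmbeddingSimilarityEvaluator` linked above, which correlates the similarity of the embeddings of each sentence pair against gold scores. A minimal sketch of such an evaluation; the sentence pairs and scores below are illustrative placeholders, not the actual STS test data:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Omartificial-Intelligence-Space/mpnet-base-all-nli-triplet-Arabic-mpnet_base")

# Illustrative Arabic sentence pairs with made-up gold similarity scores in [0, 1];
# the real tables were computed on the STS test split at each Matryoshka dimension.
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["رجل يبيع الدونات لعميل", "أطفال يبتسمون و يلوحون للكاميرا"],
    sentences2=["رجل يبيع الدونات", "الاطفال يتجهمون"],
    scores=[0.9, 0.1],
    name="sts-test-sketch",
)
results = evaluator(model)
print(results)  # Pearson/Spearman correlations per similarity function
```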
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 23.93 tokens</li><li>max: 155 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 29.62 tokens</li><li>max: 117 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 33.95 tokens</li><li>max: 149 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> |
| <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> |
| <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
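In Sentence Transformers code, these parameters correspond to wrapping a `MultipleNegativesRankingLoss` inside a `MatryoshkaLoss`, so that the same in-batch-negatives objective is applied at every truncated embedding size. A minimal sketch, assuming the base model from Model Details:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("tomaarsen/mpnet-base-all-nli-triplet")

# Inner loss: in-batch negatives over (anchor, positive, negative) triplets.
inner_loss = MultipleNegativesRankingLoss(model)

# Outer loss: apply the inner loss at each truncated embedding size with
# equal weights, matching the JSON parameters above.
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```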
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 49.5 tokens</li><li>max: 246 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 23.66 tokens</li><li>max: 103 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 25.33 tokens</li><li>max: 82 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> |
| <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> |
| <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
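
As a sketch, these non-default values map onto `SentenceTransformerTrainingArguments` as follows (the output directory is a placeholder):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="outputs",  # placeholder
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    fp16=True,
    # Keep duplicate texts out of each batch: duplicates would act as
    # false negatives for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```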
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:-----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.0229 | 200 | 21.5318 | - | - | - | - | - |
| 0.0459 | 400 | 17.2344 | - | - | - | - | - |
| 0.0688 | 600 | 15.393 | - | - | - | - | - |
| 0.0918 | 800 | 13.7897 | - | - | - | - | - |
| 0.1147 | 1000 | 13.534 | - | - | - | - | - |
| 0.1377 | 1200 | 12.2683 | - | - | - | - | - |
| 0.1606 | 1400 | 10.9271 | - | - | - | - | - |
| 0.1835 | 1600 | 11.071 | - | - | - | - | - |
| 0.2065 | 1800 | 10.0153 | - | - | - | - | - |
| 0.2294 | 2000 | 9.8463 | - | - | - | - | - |
| 0.2524 | 2200 | 10.0194 | - | - | - | - | - |
| 0.2753 | 2400 | 9.8371 | - | - | - | - | - |
| 0.2983 | 2600 | 9.6315 | - | - | - | - | - |
| 0.3212 | 2800 | 8.9858 | - | - | - | - | - |
| 0.3442 | 3000 | 9.1876 | - | - | - | - | - |
| 0.3671 | 3200 | 8.8028 | - | - | - | - | - |
| 0.3900 | 3400 | 8.6075 | - | - | - | - | - |
| 0.4130 | 3600 | 8.4285 | - | - | - | - | - |
| 0.4359 | 3800 | 8.1258 | - | - | - | - | - |
| 0.4589 | 4000 | 8.2508 | - | - | - | - | - |
| 0.4818 | 4200 | 7.8037 | - | - | - | - | - |
| 0.5048 | 4400 | 7.7133 | - | - | - | - | - |
| 0.5277 | 4600 | 7.5006 | - | - | - | - | - |
| 0.5506 | 4800 | 7.7025 | - | - | - | - | - |
| 0.5736 | 5000 | 7.7593 | - | - | - | - | - |
| 0.5965 | 5200 | 7.6305 | - | - | - | - | - |
| 0.6195 | 5400 | 7.7502 | - | - | - | - | - |
| 0.6424 | 5600 | 7.5624 | - | - | - | - | - |
| 0.6654 | 5800 | 7.5287 | - | - | - | - | - |
| 0.6883 | 6000 | 7.4261 | - | - | - | - | - |
| 0.7113 | 6200 | 7.239 | - | - | - | - | - |
| 0.7342 | 6400 | 7.1631 | - | - | - | - | - |
| 0.7571 | 6600 | 7.6865 | - | - | - | - | - |
| 0.7801 | 6800 | 7.6124 | - | - | - | - | - |
| 0.8030 | 7000 | 6.9936 | - | - | - | - | - |
| 0.8260 | 7200 | 6.7331 | - | - | - | - | - |
| 0.8489 | 7400 | 6.4542 | - | - | - | - | - |
| 0.8719 | 7600 | 6.1994 | - | - | - | - | - |
| 0.8948 | 7800 | 5.9798 | - | - | - | - | - |
| 0.9177 | 8000 | 5.7808 | - | - | - | - | - |
| 0.9407 | 8200 | 5.6952 | - | - | - | - | - |
| 0.9636 | 8400 | 5.5082 | - | - | - | - | - |
| 0.9866 | 8600 | 5.4421 | - | - | - | - | - |
| 1.0095 | 8800 | 3.0309 | - | - | - | - | - |
| 1.0026 | 9000 | 1.1835 | - | - | - | - | - |
| 1.0256 | 9200 | 8.1196 | - | - | - | - | - |
| 1.0485 | 9400 | 8.0326 | - | - | - | - | - |
| 1.0715 | 9600 | 8.5028 | - | - | - | - | - |
| 1.0944 | 9800 | 7.6923 | - | - | - | - | - |
| 1.1174 | 10000 | 8.029 | - | - | - | - | - |
| 1.1403 | 10200 | 7.5052 | - | - | - | - | - |
| 1.1632 | 10400 | 7.1177 | - | - | - | - | - |
| 1.1862 | 10600 | 6.9594 | - | - | - | - | - |
| 1.2091 | 10800 | 6.6662 | - | - | - | - | - |
| 1.2321 | 11000 | 6.6903 | - | - | - | - | - |
| 1.2550 | 11200 | 6.9523 | - | - | - | - | - |
| 1.2780 | 11400 | 6.676 | - | - | - | - | - |
| 1.3009 | 11600 | 6.7141 | - | - | - | - | - |
| 1.3238 | 11800 | 6.568 | - | - | - | - | - |
| 1.3468 | 12000 | 6.8938 | - | - | - | - | - |
| 1.3697 | 12200 | 6.3745 | - | - | - | - | - |
| 1.3927 | 12400 | 6.2513 | - | - | - | - | - |
| 1.4156 | 12600 | 6.2589 | - | - | - | - | - |
| 1.4386 | 12800 | 6.1388 | - | - | - | - | - |
| 1.4615 | 13000 | 6.1835 | - | - | - | - | - |
| 1.4845 | 13200 | 5.9004 | - | - | - | - | - |
| 1.5074 | 13400 | 5.7891 | - | - | - | - | - |
| 1.5303 | 13600 | 5.6184 | - | - | - | - | - |
| 1.5533 | 13800 | 5.9762 | - | - | - | - | - |
| 1.5762 | 14000 | 5.9737 | - | - | - | - | - |
| 1.5992 | 14200 | 5.8563 | - | - | - | - | - |
| 1.6221 | 14400 | 5.8904 | - | - | - | - | - |
| 1.6451 | 14600 | 5.8484 | - | - | - | - | - |
| 1.6680 | 14800 | 5.8906 | - | - | - | - | - |
| 1.6909 | 15000 | 5.7613 | - | - | - | - | - |
| 1.7139 | 15200 | 5.5744 | - | - | - | - | - |
| 1.7368 | 15400 | 5.6569 | - | - | - | - | - |
| 1.7598 | 15600 | 5.7439 | - | - | - | - | - |
| 1.7827 | 15800 | 5.5593 | - | - | - | - | - |
| 1.8057 | 16000 | 5.2935 | - | - | - | - | - |
| 1.8286 | 16200 | 5.088 | - | - | - | - | - |
| 1.8516 | 16400 | 5.0167 | - | - | - | - | - |
| 1.8745 | 16600 | 4.84 | - | - | - | - | - |
| 1.8974 | 16800 | 4.6731 | - | - | - | - | - |
| 1.9204 | 17000 | 4.6404 | - | - | - | - | - |
| 1.9433 | 17200 | 4.6413 | - | - | - | - | - |
| 1.9663 | 17400 | 4.4495 | - | - | - | - | - |
| 1.9892 | 17600 | 4.4262 | - | - | - | - | - |
| 2.0122 | 17800 | 2.01 | - | - | - | - | - |
| 2.0053 | 18000 | 1.8418 | - | - | - | - | - |
| 2.0282 | 18200 | 6.2714 | - | - | - | - | - |
| 2.0512 | 18400 | 6.1742 | - | - | - | - | - |
| 2.0741 | 18600 | 6.5996 | - | - | - | - | - |
| 2.0971 | 18800 | 6.0907 | - | - | - | - | - |
| 2.1200 | 19000 | 6.2418 | - | - | - | - | - |
| 2.1429 | 19200 | 5.7817 | - | - | - | - | - |
| 2.1659 | 19400 | 5.7073 | - | - | - | - | - |
| 2.1888 | 19600 | 5.2645 | - | - | - | - | - |
| 2.2118 | 19800 | 5.3451 | - | - | - | - | - |
| 2.2347 | 20000 | 5.2453 | - | - | - | - | - |
| 2.2577 | 20200 | 5.6161 | - | - | - | - | - |
| 2.2806 | 20400 | 5.2289 | - | - | - | - | - |
| 2.3035 | 20600 | 5.3888 | - | - | - | - | - |
| 2.3265 | 20800 | 5.2483 | - | - | - | - | - |
| 2.3494 | 21000 | 5.5791 | - | - | - | - | - |
| 2.3724 | 21200 | 5.1643 | - | - | - | - | - |
| 2.3953 | 21400 | 5.1231 | - | - | - | - | - |
| 2.4183 | 21600 | 5.1055 | - | - | - | - | - |
| 2.4412 | 21800 | 5.1778 | - | - | - | - | - |
| 2.4642 | 22000 | 5.0466 | - | - | - | - | - |
| 2.4871 | 22200 | 4.8321 | - | - | - | - | - |
| 2.5100 | 22400 | 4.7056 | - | - | - | - | - |
| 2.5330 | 22600 | 4.6858 | - | - | - | - | - |
| 2.5559 | 22800 | 4.9189 | - | - | - | - | - |
| 2.5789 | 23000 | 4.912 | - | - | - | - | - |
| 2.6018 | 23200 | 4.8289 | - | - | - | - | - |
| 2.6248 | 23400 | 4.8959 | - | - | - | - | - |
| 2.6477 | 23600 | 4.9441 | - | - | - | - | - |
| 2.6706 | 23800 | 4.9334 | - | - | - | - | - |
| 2.6936 | 24000 | 4.8328 | - | - | - | - | - |
| 2.7165 | 24200 | 4.601 | - | - | - | - | - |
| 2.7395 | 24400 | 4.834 | - | - | - | - | - |
| 2.7624 | 24600 | 5.152 | - | - | - | - | - |
| 2.7854 | 24800 | 4.9232 | - | - | - | - | - |
| 2.8083 | 25000 | 4.6556 | - | - | - | - | - |
| 2.8312 | 25200 | 4.6229 | - | - | - | - | - |
| 2.8542 | 25400 | 4.5768 | - | - | - | - | - |
| 2.8771 | 25600 | 4.3619 | - | - | - | - | - |
| 2.9001 | 25800 | 4.3608 | - | - | - | - | - |
| 2.9230 | 26000 | 4.2834 | - | - | - | - | - |
| 2.9403 | 26151 | - | 0.6355 | 0.6513 | 0.6703 | 0.6081 | 0.6757 |
</details>
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## <span style="color:blue">Acknowledgments</span>
The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.
## Citation
If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:
```bibtex
@misc{nacar2024enhancingsemanticsimilarityunderstanding,
      title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning},
      author={Omer Nacar and Anis Koubaa},
      year={2024},
      eprint={2407.21139},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.21139},
}
``` | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES"
] |
rjnClarke/BAAI-bge-large-en-v1.5-fine-tuned | rjnClarke | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10359",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-large-en-v1.5",
"base_model:finetune:BAAI/bge-large-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-06T12:45:12 | 2024-08-06T12:46:32 | 50 | 0 | ---
base_model: BAAI/bge-large-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@3
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@200
- cosine_map@100
- dot_accuracy@3
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@200
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10359
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Cleopatra reacts to the news of Antony's death with a mixture of
sadness and resignation, contemplating her own mortality and the fickle nature
of life.
sentences:
- "Immortal longings in me. Now no more The juice of Egypt's grape shall moist\
\ this lip. Yare, yare, good Iras; quick. Methinks I hear Antony call. I\
\ see him rouse himself To praise my noble act. I hear him mock The luck\
\ of Caesar, which the gods give men To excuse their after wrath. Husband,\
\ I come. Now to that name my courage prove my title! I am fire and air;\
\ my other elements I give to baser life. So, have you done? Come then,\
\ and take the last warmth of my lips. Farewell, kind Charmian. Iras, long\
\ farewell. [Kisses them. IRAS falls and dies] \
\ Have I the aspic in my lips? Dost fall? If thus thou and nature can so gently\
\ part, The stroke of death is as a lover's pinch, Which hurts and is desir'd.\
\ Dost thou lie still? If thou vanishest, thou tell'st the world It is\
\ not worth leave-taking. CHARMIAN. Dissolve, thick cloud, and rain, that I may\
\ say The gods themselves do weep. CLEOPATRA. This proves me base.\n \
\ If she first meet the curled Antony,\n"
- "BURGUNDY. Warlike and martial Talbot, Burgundy\n Enshrines thee in his heart,\
\ and there erects Thy noble deeds as valour's monuments. TALBOT. Thanks,\
\ gentle Duke. But where is Pucelle now? I think her old familiar is asleep.\
\ Now where's the Bastard's braves, and Charles his gleeks? What, all amort?\
\ Rouen hangs her head for grief That such a valiant company are fled. Now\
\ will we take some order in the town, Placing therein some expert officers;\
\ And then depart to Paris to the King, For there young Henry with his nobles\
\ lie. BURGUNDY. What Lord Talbot pleaseth Burgundy. TALBOT. But yet, before\
\ we go, let's not forget The noble Duke of Bedford, late deceas'd, But\
\ see his exequies fulfill'd in Rouen. A braver soldier never couched lance,\
\ A gentler heart did never sway in court; But kings and mightiest potentates\
\ must die, For that's the end of human misery. Exeunt\n"
- "Your suffering in this dearth, you may as well\n Strike at the heaven with\
\ your staves as lift them Against the Roman state; whose course will on \
\ The way it takes, cracking ten thousand curbs Of more strong link asunder\
\ than can ever Appear in your impediment. For the dearth, The gods, not\
\ the patricians, make it, and Your knees to them, not arms, must help. Alack,\
\ You are transported by calamity Thither where more attends you; and you\
\ slander The helms o' th' state, who care for you like fathers, When you\
\ curse them as enemies. FIRST CITIZEN. Care for us! True, indeed! They ne'er\
\ car'd for us yet. Suffer us to famish, and their storehouses cramm'd with\
\ grain; make edicts for usury, to support usurers; repeal daily any wholesome\
\ act established against the rich, and provide more piercing statutes daily\
\ to chain up and restrain the poor. If the wars eat us not up, they will;\
\ and there's all the love they bear us. MENENIUS. Either you must Confess\
\ yourselves wondrous malicious, Or be accus'd of folly. I shall tell you \
\ A pretty tale. It may be you have heard it; But, since it serves my purpose,\
\ I will venture To stale't a little more. FIRST CITIZEN. Well, I'll hear\
\ it, sir; yet you must not think to fob off our disgrace with a tale. But,\
\ an't please you, deliver. MENENIUS. There was a time when all the body's members\
\ Rebell'd against the belly; thus accus'd it: That only like a gulf it\
\ did remain I' th' midst o' th' body, idle and unactive, Still cupboarding\
\ the viand, never bearing Like labour with the rest; where th' other instruments\
\ Did see and hear, devise, instruct, walk, feel,\n And, mutually participate,\
\ did minister\n"
- source_sentence: How does the excerpt reflect themes of loyalty and sacrifice in
the play?
sentences:
- "me a thousand marks in links and torches, walking with thee in\n the night\
\ betwixt tavern and tavern; but the sack that thou hast drunk me would have\
\ bought me lights as good cheap at the dearest chandler's in Europe. I have\
\ maintained that salamander of yours with fire any time this two-and-thirty\
\ years. God reward me for it! Bard. 'Sblood, I would my face were in your\
\ belly! Fal. God-a-mercy! so should I be sure to be heart-burn'd.\n \
\ Enter Hostess. How now, Dame Partlet the hen? Have you enquir'd\
\ yet who pick'd\n my pocket? Host. Why, Sir John, what do you think, Sir\
\ John? Do you think I keep thieves in my house? I have search'd, I have enquired,\
\ so has my husband, man by man, boy by boy, servant by servant. The tithe\
\ of a hair was never lost in my house before. Fal. Ye lie, hostess. Bardolph\
\ was shav'd and lost many a hair, and I'll be sworn my pocket was pick'd.\
\ Go to, you are a woman, go! Host. Who, I? No; I defy thee! God's light, I was\
\ never call'd so in mine own house before! Fal. Go to, I know you well enough.\
\ Host. No, Sir John; you do not know me, Sir John. I know you, Sir John.\
\ You owe me money, Sir John, and now you pick a quarrel to beguile me of\
\ it. I bought you a dozen of shirts to your back. Fal. Dowlas, filthy dowlas!\
\ I have given them away to bakers' wives; they have made bolters of them.\
\ Host. Now, as I am a true woman, holland of eight shillings an ell. You\
\ owe money here besides, Sir John, for your diet and by-drinkings, and money\
\ lent you, four-and-twenty pound. Fal. He had his part of it; let him pay. \
\ Host. He? Alas, he is poor; he hath nothing. Fal. How? Poor? Look upon his\
\ face. What call you rich? Let them coin his nose, let them coin his cheeks.\
\ I'll not pay a denier.\n What, will you make a younker of me? Shall I not\
\ take mine ease\n"
- "EDWARD. I wonder how our princely father scap'd,\n Or whether he be scap'd\
\ away or no From Clifford's and Northumberland's pursuit. Had he been ta'en,\
\ we should have heard the news; Had he been slain, we should have heard the\
\ news; Or had he scap'd, methinks we should have heard The happy tidings\
\ of his good escape. How fares my brother? Why is he so sad? RICHARD. I cannot\
\ joy until I be resolv'd Where our right valiant father is become. I saw\
\ him in the battle range about, And watch'd him how he singled Clifford forth.\
\ Methought he bore him in the thickest troop As doth a lion in a herd of\
\ neat;\n Or as a bear, encompass'd round with dogs,\n Who having pinch'd\
\ a few and made them cry, The rest stand all aloof and bark at him. So\
\ far'd our father with his enemies; So fled his enemies my warlike father.\
\ Methinks 'tis prize enough to be his son. See how the morning opes her\
\ golden gates And takes her farewell of the glorious sun. How well resembles\
\ it the prime of youth, Trimm'd like a younker prancing to his love! EDWARD.\
\ Dazzle mine eyes, or do I see three suns? RICHARD. Three glorious suns, each\
\ one a perfect sun; Not separated with the racking clouds, But sever'd\
\ in a pale clear-shining sky. See, see! they join, embrace, and seem to kiss,\
\ As if they vow'd some league inviolable. Now are they but one lamp, one\
\ light, one sun. In this the heaven figures some event. EDWARD. 'Tis wondrous\
\ strange, the like yet never heard of. I think it cites us, brother, to the\
\ field, That we, the sons of brave Plantagenet, Each one already blazing\
\ by our meeds, Should notwithstanding join our lights together And overshine\
\ the earth, as this the world. Whate'er it bodes, henceforward will I bear\
\ Upon my target three fair shining suns. RICHARD. Nay, bear three daughters-\
\ by your leave I speak it, You love the breeder better than the male.\n"
- "Forget that rarest treasure of your cheek,\n Exposing it- but, O, the harder\
\ heart! Alack, no remedy!- to the greedy touch Of common-kissing Titan,\
\ and forget Your laboursome and dainty trims wherein You made great Juno\
\ angry. IMOGEN. Nay, be brief; I see into thy end, and am almost A man\
\ already. PISANIO. First, make yourself but like one. Fore-thinking this,\
\ I have already fit- 'Tis in my cloak-bag- doublet, hat, hose, all That\
\ answer to them. Would you, in their serving, And with what imitation you\
\ can borrow From youth of such a season, fore noble Lucius Present yourself,\
\ desire his service, tell him Wherein you're happy- which will make him know\
\ If that his head have ear in music; doubtless With joy he will embrace\
\ you; for he's honourable, And, doubling that, most holy. Your means abroad-\
\ You have me, rich; and I will never fail Beginning nor supplyment. IMOGEN.\
\ Thou art all the comfort The gods will diet me with. Prithee away! There's\
\ more to be consider'd; but we'll even All that good time will give us. This\
\ attempt I am soldier to, and will abide it with A prince's courage. Away,\
\ I prithee. PISANIO. Well, madam, we must take a short farewell, Lest, being\
\ miss'd, I be suspected of Your carriage from the court. My noble mistress,\
\ Here is a box; I had it from the Queen. What's in't is precious. If you\
\ are sick at sea Or stomach-qualm'd at land, a dram of this\n Will drive\
\ away distemper. To some shade,\n And fit you to your manhood. May the gods\
\ Direct you to the best! IMOGEN. Amen. I thank thee. Exeunt\
\ severally\n"
- source_sentence: The excerpt showcases the emotional turmoil and sense of honor
that drives Brutus to take his own life in the face of defeat.
sentences:
- "Thou know'st that we two went to school together;\n Even for that our love\
\ of old, I prithee, Hold thou my sword-hilts, whilst I run on it. VOLUMNIUS.\
\ That's not an office for a friend, my lord. \
\ Alarum still. CLITUS. Fly, fly, my lord, there is no tarrying\
\ here. BRUTUS. Farewell to you, and you, and you, Volumnius. Strato, thou\
\ hast been all this while asleep; Farewell to thee too, Strato. Countrymen,\
\ My heart doth joy that yet in all my life I found no man but he was true\
\ to me. I shall have glory by this losing day, More than Octavius and Mark\
\ Antony By this vile conquest shall attain unto. So, fare you well at once,\
\ for Brutus' tongue Hath almost ended his life's history. Night hangs upon\
\ mine eyes, my bones would rest That have but labor'd to attain this hour.\
\ Alarum. Cry within, \"Fly, fly, fly!\" CLITUS. Fly,\
\ my lord, fly. BRUTUS. Hence! I will follow. Exeunt Clitus,\
\ Dardanius, and Volumnius. I prithee, Strato, stay thou by thy lord. Thou\
\ art a fellow of a good respect; Thy life hath had some smatch of honor in\
\ it. Hold then my sword, and turn away thy face, While I do run upon it.\
\ Wilt thou, Strato? STRATO. Give me your hand first. Fare you well, my lord.\
\ BRUTUS. Farewell, good Strato. Runs on his sword. Caesar,\
\ now be still; I kill'd not thee with half so good a will. Dies.\n\
\ Alarum. Retreat. Enter Octavius, Antony, Messala,\n Lucilius,\
\ and the Army.\n OCTAVIUS. What man is that?\n"
- "Elsinore. A room in the Castle.\nEnter King, Queen, Polonius, Ophelia, Rosencrantz,\
\ Guildenstern, and Lords. King. And can you by no drift of circumstance\n \
\ Get from him why he puts on this confusion, Grating so harshly all his days\
\ of quiet With turbulent and dangerous lunacy? Ros. He does confess he feels\
\ himself distracted, But from what cause he will by no means speak. Guil.\
\ Nor do we find him forward to be sounded, But with a crafty madness keeps\
\ aloof When we would bring him on to some confession Of his true state.\
\ Queen. Did he receive you well? Ros. Most like a gentleman. Guil. But with\
\ much forcing of his disposition. Ros. Niggard of question, but of our demands\
\ Most free in his reply. Queen. Did you assay him To any pastime? Ros.\
\ Madam, it so fell out that certain players\n We o'erraught on the way.\
\ Of these we told him,\n"
- "VII.\nThe French camp near Agincourt\nEnter the CONSTABLE OF FRANCE, the LORD\
\ RAMBURES, the DUKE OF ORLEANS,\nthe DAUPHIN, with others\n CONSTABLE. Tut!\
\ I have the best armour of the world.\n Would it were day! ORLEANS. You have\
\ an excellent armour; but let my horse have his due. CONSTABLE. It is the\
\ best horse of Europe. ORLEANS. Will it never be morning? DAUPHIN. My Lord\
\ of Orleans and my Lord High Constable, you talk of horse and armour? ORLEANS.\
\ You are as well provided of both as any prince in the world. DAUPHIN. What\
\ a long night is this! I will not change my horse with any that treads but\
\ on four pasterns. Ca, ha! he bounds from the earth as if his entrails were\
\ hairs; le cheval volant, the Pegasus, chez les narines de feu! When I bestride\
\ him I soar, I am a hawk. He trots the air; the earth sings when he touches\
\ it; the basest horn of his hoof is more musical than the pipe of Hermes.\
\ ORLEANS. He's of the colour of the nutmeg. DAUPHIN. And of the heat of the\
\ ginger. It is a beast for Perseus: he is pure air and fire; and the dull\
\ elements of earth and water never appear in him, but only in patient stillness\
\ while his rider mounts him; he is indeed a horse, and all other jades you\
\ may call beasts. CONSTABLE. Indeed, my lord, it is a most absolute and excellent\
\ horse.\n DAUPHIN. It is the prince of palfreys; his neigh is like the\n"
- source_sentence: What themes are present in the excerpt from the play?
sentences:
- "Enter TRAVERS NORTHUMBERLAND. Here comes my servant Travers, whom I sent\n \
\ On Tuesday last to listen after news. LORD BARDOLPH. My lord, I over-rode\
\ him on the way; And he is furnish'd with no certainties More than he haply\
\ may retail from me. NORTHUMBERLAND. Now, Travers, what good tidings comes with\
\ you? TRAVERS. My lord, Sir John Umfrevile turn'd me back With joyful tidings;\
\ and, being better hors'd, Out-rode me. After him came spurring hard A\
\ gentleman, almost forspent with speed, That stopp'd by me to breathe his\
\ bloodied horse. He ask'd the way to Chester; and of him I did demand what\
\ news from Shrewsbury. He told me that rebellion had bad luck, And that\
\ young Harry Percy's spur was cold. With that he gave his able horse the\
\ head And, bending forward, struck his armed heels\n Against the panting\
\ sides of his poor jade\n Up to the rowel-head; and starting so, He seem'd\
\ in running to devour the way, Staying no longer question. NORTHUMBERLAND.\
\ Ha! Again: Said he young Harry Percy's spur was cold? Of Hotspur, Coldspur?\
\ that rebellion Had met ill luck? LORD BARDOLPH. My lord, I'll tell you what:\
\ If my young lord your son have not the day, Upon mine honour, for a silken\
\ point I'll give my barony. Never talk of it. NORTHUMBERLAND. Why should\
\ that gentleman that rode by Travers Give then such instances of loss? LORD\
\ BARDOLPH. Who- he? He was some hilding fellow that had stol'n The horse\
\ he rode on and, upon my life, Spoke at a venture. Look, here comes more news.\
\ \n Enter Morton NORTHUMBERLAND. Yea, this man's brow,\
\ like to a title-leaf,\n"
- "ANTONY. Yet they are not join'd. Where yond pine does stand\n I shall discover\
\ all. I'll bring thee word Straight how 'tis like to go. \
\ Exit SCARUS. Swallows have built In Cleopatra's sails their nests.\
\ The augurers Say they know not, they cannot tell; look grimly, And dare\
\ not speak their knowledge. Antony Is valiant and dejected; and by starts\
\ His fretted fortunes give him hope and fear Of what he has and has not.\
\ [Alarum afar off, as at a sea-fight]\n \
\ Re-enter ANTONY ANTONY. All is lost!\n This foul Egyptian hath\
\ betrayed me. My fleet hath yielded to the foe, and yonder They cast\
\ their caps up and carouse together Like friends long lost. Triple-turn'd\
\ whore! 'tis thou\n Hast sold me to this novice; and my heart\n Makes\
\ only wars on thee. Bid them all fly; For when I am reveng'd upon my charm,\
\ I have done all. Bid them all fly; begone. Exit SCARUS O sun, thy\
\ uprise shall I see no more! Fortune and Antony part here; even here Do\
\ we shake hands. All come to this? The hearts That spaniel'd me at heels,\
\ to whom I gave Their wishes, do discandy, melt their sweets On blossoming\
\ Caesar; and this pine is bark'd That overtopp'd them all. Betray'd I am.\
\ O this false soul of Egypt! this grave charm- Whose eye beck'd forth my\
\ wars and call'd them home, Whose bosom was my crownet, my chief end- Like\
\ a right gypsy hath at fast and loose Beguil'd me to the very heart of loss.\
\ What, Eros, Eros! Enter CLEOPATRA\n Ah, thou spell!\
\ Avaunt!\n"
- "TALBOT. Saint George and victory! Fight, soldiers, fight.\n The Regent hath\
\ with Talbot broke his word And left us to the rage of France his sword. \
\ Where is John Talbot? Pause and take thy breath; I gave thee life and rescu'd\
\ thee from death. JOHN. O, twice my father, twice am I thy son! The life\
\ thou gav'st me first was lost and done Till with thy warlike sword, despite\
\ of fate, To my determin'd time thou gav'st new date. TALBOT. When from the\
\ Dauphin's crest thy sword struck fire, It warm'd thy father's heart with\
\ proud desire Of bold-fac'd victory. Then leaden age, Quicken'd with youthful\
\ spleen and warlike rage, Beat down Alencon, Orleans, Burgundy, And from\
\ the pride of Gallia rescued thee. The ireful bastard Orleans, that drew blood\
\ From thee, my boy, and had the maidenhood Of thy first fight, I soon encountered\
\ And, interchanging blows, I quickly shed Some of his bastard blood; and\
\ in disgrace\n Bespoke him thus: 'Contaminated, base,\n"
- source_sentence: What is the significance of the tennis balls in the excerpt from
the play?
sentences:
- "My fault is past. But, O, what form of prayer\n Can serve my turn? 'Forgive\
\ me my foul murther'? That cannot be; since I am still possess'd Of those\
\ effects for which I did the murther- My crown, mine own ambition, and my\
\ queen. May one be pardon'd and retain th' offence? In the corrupted currents\
\ of this world Offence's gilded hand may shove by justice, And oft 'tis\
\ seen the wicked prize itself Buys out the law; but 'tis not so above. \
\ There is no shuffling; there the action lies In his true nature, and we ourselves\
\ compell'd, Even to the teeth and forehead of our faults, To give in evidence.\
\ What then? What rests? Try what repentance can. What can it not? Yet what\
\ can it when one cannot repent? O wretched state! O bosom black as death!\
\ O limed soul, that, struggling to be free, Art more engag'd! Help, angels!\
\ Make assay. Bow, stubborn knees; and heart with strings of steel, Be\
\ soft as sinews of the new-born babe! All may be well. \
\ He kneels.\n Enter Hamlet. Ham. Now might\
\ I do it pat, now he is praying;\n And now I'll do't. And so he goes to heaven,\
\ And so am I reveng'd. That would be scann'd. A villain kills my father;\
\ and for that, I, his sole son, do this same villain send To heaven. \
\ Why, this is hire and salary, not revenge! He took my father grossly, full\
\ of bread, With all his crimes broad blown, as flush as May; And how his\
\ audit stands, who knows save heaven?\n But in our circumstance and course\
\ of thought,\n"
- "YORK. From Ireland thus comes York to claim his right\n And pluck the crown\
\ from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright,\
\ To entertain great England's lawful king. Ah, sancta majestas! who would\
\ not buy thee dear? Let them obey that knows not how to rule; This hand\
\ was made to handle nought but gold. I cannot give due action to my words\
\ Except a sword or sceptre balance it.\n A sceptre shall it have, have\
\ I a soul\n On which I'll toss the flower-de-luce of France.\n \
\ Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb\
\ me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York,\
\ if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept\
\ thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger\
\ from Henry, our dread liege, To know the reason of these arms in peace; \
\ Or why thou, being a subject as I am, Against thy oath and true allegiance\
\ sworn, Should raise so great a power without his leave, Or dare to bring\
\ thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is\
\ so great. O, I could hew up rocks and fight with flint, I am so angry\
\ at these abject terms; And now, like Ajax Telamonius, On sheep or oxen\
\ could I spend my fury. I am far better born than is the King, More like\
\ a king, more kingly in my thoughts; But I must make fair weather yet awhile,\
\ Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon\
\ me That I have given no answer all this while; My mind was troubled with\
\ deep melancholy. The cause why I have brought this army hither Is to\
\ remove proud Somerset from the King, Seditious to his Grace and to the state.\
\ BUCKINGHAM. That is too much presumption on thy part; But if thy arms be\
\ to no other end, The King hath yielded unto thy demand:\n The Duke of\
\ Somerset is in the Tower.\n"
- "Says that you savour too much of your youth,\n And bids you be advis'd there's\
\ nought in France That can be with a nimble galliard won; You cannot revel\
\ into dukedoms there. He therefore sends you, meeter for your spirit, This\
\ tun of treasure; and, in lieu of this, Desires you let the dukedoms that\
\ you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What\
\ treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the\
\ Dauphin is so pleasant with us; His present and your pains we thank you for.\
\ When we have match'd our rackets to these balls, We will in France,\
\ by God's grace, play a set Shall strike his father's crown into the hazard.\
\ Tell him he hath made a match with such a wrangler That all the courts\
\ of France will be disturb'd With chaces. And we understand him well, How\
\ he comes o'er us with our wilder days, Not measuring what use we made of\
\ them. We never valu'd this poor seat of England; And therefore, living\
\ hence, did give ourself To barbarous licence; as 'tis ever common That\
\ men are merriest when they are from home. But tell the Dauphin I will keep\
\ my state, Be like a king, and show my sail of greatness, When I do rouse\
\ me in my throne of France; For that I have laid by my majesty And plodded\
\ like a man for working-days; But I will rise there with so full a glory \
\ That I will dazzle all the eyes of France, Yea, strike the Dauphin blind\
\ to look on us. And tell the pleasant Prince this mock of his Hath turn'd\
\ his balls to gun-stones, and his soul Shall stand sore charged for the wasteful\
\ vengeance\n That shall fly with them; for many a thousand widows\n"
model-index:
- name: RAG_general/rerank/models/BAAI-bge-large-en-v1.5-ft
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: large dev
type: large-dev
metrics:
- type: cosine_accuracy@3
value: 0.5243266724587315
name: Cosine Accuracy@3
- type: cosine_precision@1
value: 0.4161598609904431
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.17477555748624385
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11268462206776718
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.060729800173761936
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4161598609904431
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5243266724587315
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5634231103388357
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6072980017376195
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5090845268414399
name: Cosine Ndcg@10
- type: cosine_mrr@200
value: 0.483708993138636
name: Cosine Mrr@200
- type: cosine_map@100
value: 0.483416229474969
name: Cosine Map@100
- type: dot_accuracy@3
value: 0.5243266724587315
name: Dot Accuracy@3
- type: dot_precision@1
value: 0.4161598609904431
name: Dot Precision@1
- type: dot_precision@3
value: 0.17477555748624385
name: Dot Precision@3
- type: dot_precision@5
value: 0.11268462206776718
name: Dot Precision@5
- type: dot_precision@10
value: 0.060729800173761936
name: Dot Precision@10
- type: dot_recall@1
value: 0.4161598609904431
name: Dot Recall@1
- type: dot_recall@3
value: 0.5243266724587315
name: Dot Recall@3
- type: dot_recall@5
value: 0.5634231103388357
name: Dot Recall@5
- type: dot_recall@10
value: 0.6072980017376195
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.5090845268414399
name: Dot Ndcg@10
- type: dot_mrr@200
value: 0.483708993138636
name: Dot Mrr@200
- type: dot_map@100
value: 0.483416229474969
name: Dot Map@100
---
# RAG_general/rerank/models/BAAI-bge-large-en-v1.5-ft
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) <!-- at revision d4aa6901d3a41ba39fb536a557fa166f842b0e09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rjnClarke/BAAI-bge-large-en-v1.5-fine-tuned")
# Run inference
sentences = [
'What is the significance of the tennis balls in the excerpt from the play?',
"Says that you savour too much of your youth,\n And bids you be advis'd there's nought in France That can be with a nimble galliard won; You cannot revel into dukedoms there. He therefore sends you, meeter for your spirit, This tun of treasure; and, in lieu of this, Desires you let the dukedoms that you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant with us; His present and your pains we thank you for. When we have match'd our rackets to these balls, We will in France, by God's grace, play a set Shall strike his father's crown into the hazard. Tell him he hath made a match with such a wrangler That all the courts of France will be disturb'd With chaces. And we understand him well, How he comes o'er us with our wilder days, Not measuring what use we made of them. We never valu'd this poor seat of England; And therefore, living hence, did give ourself To barbarous licence; as 'tis ever common That men are merriest when they are from home. But tell the Dauphin I will keep my state, Be like a king, and show my sail of greatness, When I do rouse me in my throne of France; For that I have laid by my majesty And plodded like a man for working-days; But I will rise there with so full a glory That I will dazzle all the eyes of France, Yea, strike the Dauphin blind to look on us. And tell the pleasant Prince this mock of his Hath turn'd his balls to gun-stones, and his soul Shall stand sore charged for the wasteful vengeance\n That shall fly with them; for many a thousand widows\n",
"YORK. From Ireland thus comes York to claim his right\n And pluck the crown from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright, To entertain great England's lawful king. Ah, sancta majestas! who would not buy thee dear? Let them obey that knows not how to rule; This hand was made to handle nought but gold. I cannot give due action to my words Except a sword or sceptre balance it.\n A sceptre shall it have, have I a soul\n On which I'll toss the flower-de-luce of France.\n Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York, if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger from Henry, our dread liege, To know the reason of these arms in peace; Or why thou, being a subject as I am, Against thy oath and true allegiance sworn, Should raise so great a power without his leave, Or dare to bring thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is so great. O, I could hew up rocks and fight with flint, I am so angry at these abject terms; And now, like Ajax Telamonius, On sheep or oxen could I spend my fury. I am far better born than is the King, More like a king, more kingly in my thoughts; But I must make fair weather yet awhile, Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon me That I have given no answer all this while; My mind was troubled with deep melancholy. The cause why I have brought this army hither Is to remove proud Somerset from the King, Seditious to his Grace and to the state. BUCKINGHAM. That is too much presumption on thy part; But if thy arms be to no other end, The King hath yielded unto thy demand:\n The Duke of Somerset is in the Tower.\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `large-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@3 | 0.5243 |
| cosine_precision@1 | 0.4162 |
| cosine_precision@3 | 0.1748 |
| cosine_precision@5 | 0.1127 |
| cosine_precision@10 | 0.0607 |
| cosine_recall@1 | 0.4162 |
| cosine_recall@3 | 0.5243 |
| cosine_recall@5 | 0.5634 |
| cosine_recall@10 | 0.6073 |
| cosine_ndcg@10 | 0.5091 |
| cosine_mrr@200 | 0.4837 |
| **cosine_map@100** | **0.4834** |
| dot_accuracy@3 | 0.5243 |
| dot_precision@1 | 0.4162 |
| dot_precision@3 | 0.1748 |
| dot_precision@5 | 0.1127 |
| dot_precision@10 | 0.0607 |
| dot_recall@1 | 0.4162 |
| dot_recall@3 | 0.5243 |
| dot_recall@5 | 0.5634 |
| dot_recall@10 | 0.6073 |
| dot_ndcg@10 | 0.5091 |
| dot_mrr@200 | 0.4837 |
| dot_map@100 | 0.4834 |
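
These figures are produced by the evaluator linked above. As a minimal sketch of how such an evaluation can be reproduced (the queries, corpus, and relevance judgments below are hypothetical placeholders, since the `large-dev` split itself is not included in this card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rjnClarke/BAAI-bge-large-en-v1.5-fine-tuned")

# Hypothetical placeholder data -- the real "large-dev" split is not published here.
queries = {"q1": "What is the significance of the tennis balls in the excerpt?"}
corpus = {
    "d1": "EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant with us ...",
    "d2": "YORK. From Ireland thus comes York to claim his right ...",
}
relevant_docs = {"q1": {"d1"}}  # each query id maps to the set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="large-dev")
metrics = evaluator(model)  # dict of scores, e.g. the cosine_map@100 reported above
print(metrics)
```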
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,359 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 22.32 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 351.19 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Who is the general being described in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>What is the main conflict highlighted in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>The excerpt showcases the tension between Antony's loyalty to Cleopatra and his obligations to Caesar, as well as Cleopatra's influence over him.</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
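
In code, this loss configuration corresponds to a sketch like the following; the two-column dataset here is a hypothetical stand-in for the unnamed training set, matching its `anchor`/`positive` schema:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("BAAI/bge-large-en-v1.5")

# Hypothetical stand-in rows with the anchor/positive columns described above.
train_dataset = Dataset.from_dict({
    "anchor": ["Who is the general being described in the excerpt?"],
    "positive": ["PHILO. Nay, but this dotage of our general's ..."],
})

# scale=20.0 and cosine similarity, exactly as in the parameter block above;
# in-batch negatives come from the other positives in each batch.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```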
### Evaluation Dataset
#### Unnamed Dataset
* Size: 2,302 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 21.73 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 354.59 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The excerpt highlights the tension between Antony's loyalty to Cleopatra and his standing in Rome, showcasing the intricate balance of power and love in the play.</code> | <code>When shrill-tongu'd Fulvia scolds. The messengers!<br> ANTONY. Let Rome in Tiber melt, and the wide arch Of the rang'd empire fall! Here is my space. Kingdoms are clay; our dungy earth alike Feeds beast as man. The nobleness of life Is to do thus [emhracing], when such a mutual pair And such a twain can do't, in which I bind, On pain of punishment, the world to weet We stand up peerless. CLEOPATRA. Excellent falsehood! Why did he marry Fulvia, and not love her? I'll seem the fool I am not. Antony Will be himself. ANTONY. But stirr'd by Cleopatra. Now for the love of Love and her soft hours, Let's not confound the time with conference harsh; There's not a minute of our lives should stretch Without some pleasure now. What sport to-night? CLEOPATRA. Hear the ambassadors. ANTONY. Fie, wrangling queen! Whom everything becomes- to chide, to laugh, To weep; whose every passion fully strives To make itself in thee fair and admir'd. No messenger but thine, and all alone To-night we'll wander through the streets and note The qualities of people. Come, my queen; Last night you did desire it. Speak not to us. Exeunt ANTONY and CLEOPATRA, with the train DEMETRIUS. Is Caesar with Antonius priz'd so slight? PHILO. Sir, sometimes when he is not Antony, He comes too short of that great property Which still should go with Antony. DEMETRIUS. I am full sorry That he approves the common liar, who Thus speaks of him at Rome; but I will hope<br> Of better deeds to-morrow. Rest you happy! Exeunt<br></code> |
| <code>What is the significance of the soothsayer in the context of the play?</code> | <code>CHARMIAN. Lord Alexas, sweet Alexas, most anything Alexas, almost<br> most absolute Alexas, where's the soothsayer that you prais'd so to th' Queen? O that I knew this husband, which you say must charge his horns with garlands! ALEXAS. Soothsayer! SOOTHSAYER. Your will? CHARMIAN. Is this the man? Is't you, sir, that know things? SOOTHSAYER. In nature's infinite book of secrecy A little I can read. ALEXAS. Show him your hand.<br> Enter ENOBARBUS ENOBARBUS. Bring in the banquet quickly; wine enough<br> Cleopatra's health to drink. CHARMIAN. Good, sir, give me good fortune. SOOTHSAYER. I make not, but foresee. CHARMIAN. Pray, then, foresee me one. SOOTHSAYER. You shall be yet far fairer than you are. CHARMIAN. He means in flesh. IRAS. No, you shall paint when you are old. CHARMIAN. Wrinkles forbid! ALEXAS. Vex not his prescience; be attentive. CHARMIAN. Hush!<br> SOOTHSAYER. You shall be more beloving than beloved.<br></code> |
| <code>What is the setting of the scene in which the excerpt takes place?</code> | <code>sweet Isis, I beseech thee! And let her die too, and give him a<br> worse! And let worse follow worse, till the worst of all follow him laughing to his grave, fiftyfold a cuckold! Good Isis, hear me this prayer, though thou deny me a matter of more weight; good Isis, I beseech thee! IRAS. Amen. Dear goddess, hear that prayer of the people! For, as it is a heartbreaking to see a handsome man loose-wiv'd, so it is a deadly sorrow to behold a foul knave uncuckolded. Therefore, dear Isis, keep decorum, and fortune him accordingly! CHARMIAN. Amen. ALEXAS. Lo now, if it lay in their hands to make me a cuckold, they would make themselves whores but they'ld do't!<br> Enter CLEOPATRA ENOBARBUS. Hush! Here comes Antony.<br> CHARMIAN. Not he; the Queen. CLEOPATRA. Saw you my lord? ENOBARBUS. No, lady. CLEOPATRA. Was he not here? CHARMIAN. No, madam. CLEOPATRA. He was dispos'd to mirth; but on the sudden A Roman thought hath struck him. Enobarbus! ENOBARBUS. Madam? CLEOPATRA. Seek him, and bring him hither. Where's Alexas? ALEXAS. Here, at your service. My lord approaches.<br> Enter ANTONY, with a MESSENGER and attendants CLEOPATRA. We will not look upon him. Go with us.<br> Exeunt CLEOPATRA, ENOBARBUS, and the rest MESSENGER. Fulvia thy wife first came into the field. ANTONY. Against my brother Lucius? MESSENGER. Ay.<br> But soon that war had end, and the time's state<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 3e-05
- `num_train_epochs`: 4
- `warmup_steps`: 50
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
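
In the Sentence Transformers 3.x trainer API these non-default values map onto `SentenceTransformerTrainingArguments` roughly as sketched below; `output_dir` and `save_strategy` are assumptions (the latter so that `load_best_model_at_end` is valid), not values listed on this card:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/BAAI-bge-large-en-v1.5-ft",  # assumed path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumed; must match eval_strategy for load_best_model_at_end
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=4,
    warmup_steps=50,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # the "no_duplicates" sampler above
)
```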
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 50
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | large-dev_cosine_map@100 |
|:-------:|:--------:|:-------------:|:----------:|:------------------------:|
| 1.0 | 324 | - | 1.5357 | 0.4824 |
| 1.5432 | 500 | 1.7247 | - | - |
| 2.0 | 648 | - | 1.5137 | 0.4806 |
| 3.0 | 972 | - | 1.5700 | 0.4732 |
| 3.0864 | 1000 | 0.8627 | - | - |
| **4.0** | **1296** | **-** | **1.5816** | **0.4834** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.43.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
bhargavis/fulltrain-xsum-bart | bhargavis | summarization | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"fine-tuning",
"bart-large",
"xsum",
"summarization",
"en",
"dataset:EdinburghNLP/xsum",
"base_model:facebook/bart-large",
"base_model:finetune:facebook/bart-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-05T17:09:40 | 2025-02-15T21:00:07 | 50 | 1 | ---
base_model:
- facebook/bart-large
datasets:
- EdinburghNLP/xsum
language:
- en
library_name: transformers
license: mit
metrics:
- rouge
pipeline_tag: summarization
tags:
- fine-tuning
- bart-large
- xsum
new_version: facebook/bart-large
---
## Model Description
#### Model - fulltrain-xsum-bart
- Architecture - BART (Bidirectional and Auto-Regressive Transformers)
- Task - Abstractive Summarization
- Dataset - XSum (Extreme Summarization)
- Training Hardware - 2x NVIDIA T4 GPUs (using Kaggle)
- Training Time - ~9 hours
This model is fine-tuned on the XSum dataset for abstractive summarization. It takes a long document as input and generates a concise summary.
#### Dataset Details
- Train Dataset - 204,045 samples
- Validation Dataset - 11,332 samples
- Test Dataset - 11,334 samples
The XSum dataset consists of BBC articles and their corresponding single-sentence summaries. The model was trained to generate summaries that are concise and capture the essence of the input document.
#### Training Details
| Training Parameter | Value |
| ------------- |:-------------:|
| Training Epochs | 1 |
| Batch Size | 8 (per device) |
| Learning Rate | 5e-5 |
| Weight Decay | 0.01 |
| Warmup Steps | 500 |
| FP16 Training | Enabled |
| Evaluation Strategy | Per Epoch |
| Best Model Selection | Based on validation loss (eval_loss) |
#### Evaluation Metrics
The model was evaluated using the following metrics:
| Metric | Score |
| ------------- |:-------------:|
| Training Loss | 0.3771 |
| Validation Loss | 0.350379 |
| Rouge-1 | 0.401344019 |
| Rouge-2 | 0.188076798 |
| Rouge-L | 0.33460693 |
The ROUGE scores were computed using the `rouge_scorer` library.
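For reference, a small sketch of how such scores can be computed with that library (the example strings are illustrative, not drawn from the evaluation set):

```python
from rouge_score import rouge_scorer  # pip install rouge-score

# Score one illustrative prediction against a reference summary.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "Officials have warned residents after repeated sightings of an aggressive bear."
prediction = "Authorities warn residents to stay cautious after several bear sightings."
scores = scorer.score(reference, prediction)
for name, score in scores.items():
    print(f"{name}: P={score.precision:.3f} R={score.recall:.3f} F1={score.fmeasure:.3f}")
```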
#### Training Arguments
The model was trained using the following Hugging Face `Seq2SeqTrainingArguments`:
| Arguments | Value |
| ------------- |:-------------:|
| Save Strategy | Per Epoch |
| Logging Steps | 1000 |
| Dataloader Workers | 4 |
| Predict with Generate | True |
| Load Best Model at End | True |
| Metric for Best Model | eval_loss |
| Greater is Better | False (Lower validation loss is better) |
| Report To | Weights & Biases (WandB) |
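Taken together, a hedged reconstruction of these settings as `Seq2SeqTrainingArguments` might look like the sketch below; `output_dir` is an assumption, as the card does not state it:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="fulltrain-xsum-bart",  # assumed; not stated on the card
    num_train_epochs=1,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    weight_decay=0.01,
    warmup_steps=500,
    fp16=True,
    eval_strategy="epoch",  # recent transformers; older versions use evaluation_strategy
    save_strategy="epoch",
    logging_steps=1000,
    dataloader_num_workers=4,
    predict_with_generate=True,  # needed to generate summaries during evaluation
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    report_to="wandb",
)
```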
##### Other considerations
- The model was fine-tuned on the XSum dataset, which consists of BBC articles, so its performance may vary on other domains or types of text, and it may inherit biases present in that dataset.
- The model generates summaries based on patterns learned during training. It may occasionally produce inaccurate or misleading summaries, especially for complex or ambiguous input text.
- The model may struggle with highly technical or domain-specific content, as it was not explicitly trained on such data.
- The model generates summaries in English only.
### Usage
Below is an example of how to load and use the model:
```python
from transformers import pipeline
# Load the fine-tuned model
summarizer = pipeline("summarization", model="bhargavis/fulltrain-xsum-bart")
# Provide input text
input_text = """
Authorities have issued a warning after multiple sightings of a large brown bear in the woods. The bear is known to become aggressive if disturbed, and residents are urged to exercise caution. Last week, a group of hikers reported a close encounter with the animal. While no injuries were sustained, the bear displayed defensive behavior when approached. Wildlife officials advise keeping a safe distance and avoiding the area if possible. Those encountering the bear should remain calm, back away slowly, and refrain from making sudden movements. Officials continue to monitor the situation.
"""
# Generate summary
summary = summarizer(input_text, max_length=64, min_length=30, do_sample=False)
print(summary[0]["summary_text"])
``` | [
"SUMMARIZATION"
] | [
"BEAR"
] |
danbev/granite-embedding-30m-english-Q8_0-GGUF | danbev | sentence-similarity | [
"transformers",
"gguf",
"language",
"granite",
"embeddings",
"mteb",
"llama-cpp",
"gguf-my-repo",
"sentence-similarity",
"en",
"base_model:ibm-granite/granite-embedding-30m-english",
"base_model:quantized:ibm-granite/granite-embedding-30m-english",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | 2025-02-12T11:25:14 | 2025-02-12T11:25:21 | 50 | 0 | ---
base_model: ibm-granite/granite-embedding-30m-english
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- language
- granite
- embeddings
- mteb
- llama-cpp
- gguf-my-repo
model-index:
- name: ibm-granite/granite-embedding-30m-english
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 62.856100000000005
- type: f1
value: 51.5046
- type: f1_weighted
value: 69.9775
- type: ap
value: 15.4995
- type: ap_weighted
value: 15.4995
- type: main_score
value: 62.856100000000005
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 60.925399999999996
- type: f1
value: 55.0092
- type: f1_weighted
value: 64.8014
- type: ap
value: 25.0517
- type: ap_weighted
value: 25.0517
- type: main_score
value: 60.925399999999996
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 62.983599999999996
- type: f1
value: 62.553599999999996
- type: f1_weighted
value: 62.553599999999996
- type: ap
value: 58.3423
- type: ap_weighted
value: 58.3423
- type: main_score
value: 62.983599999999996
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 32.178000000000004
- type: f1
value: 31.5201
- type: f1_weighted
value: 31.5201
- type: main_score
value: 32.178000000000004
- task:
type: Retrieval
dataset:
name: MTEB AppsRetrieval (default)
type: CoIR-Retrieval/apps
config: default
split: test
revision: f22508f96b7a36c2415181ed8bb76f76e04ae2d5
metrics:
- type: ndcg_at_1
value: 3.5060000000000002
- type: ndcg_at_3
value: 4.789000000000001
- type: ndcg_at_5
value: 5.314
- type: ndcg_at_10
value: 6.203
- type: ndcg_at_20
value: 6.801
- type: ndcg_at_100
value: 8.588
- type: ndcg_at_1000
value: 12.418999999999999
- type: map_at_1
value: 3.5060000000000002
- type: map_at_3
value: 4.471
- type: map_at_5
value: 4.7620000000000005
- type: map_at_10
value: 5.117
- type: map_at_20
value: 5.281000000000001
- type: map_at_100
value: 5.501
- type: map_at_1000
value: 5.611
- type: recall_at_1
value: 3.5060000000000002
- type: recall_at_3
value: 5.71
- type: recall_at_5
value: 6.984999999999999
- type: recall_at_10
value: 9.801
- type: recall_at_20
value: 12.165
- type: recall_at_100
value: 22.205
- type: recall_at_1000
value: 54.396
- type: precision_at_1
value: 3.5060000000000002
- type: precision_at_3
value: 1.9029999999999998
- type: precision_at_5
value: 1.397
- type: precision_at_10
value: 0.98
- type: precision_at_20
value: 0.608
- type: precision_at_100
value: 0.22200000000000003
- type: precision_at_1000
value: 0.054
- type: mrr_at_1
value: 3.5060000000000002
- type: mrr_at_3
value: 4.471
- type: mrr_at_5
value: 4.7618
- type: mrr_at_10
value: 5.1166
- type: mrr_at_20
value: 5.2806
- type: mrr_at_100
value: 5.5014
- type: mrr_at_1000
value: 5.6113
- type: nauc_ndcg_at_1_max
value: 32.8089
- type: nauc_ndcg_at_1_std
value: 13.0518
- type: nauc_ndcg_at_1_diff1
value: 44.3602
- type: nauc_ndcg_at_3_max
value: 28.5037
- type: nauc_ndcg_at_3_std
value: 12.1308
- type: nauc_ndcg_at_3_diff1
value: 33.0191
- type: nauc_ndcg_at_5_max
value: 25.970100000000002
- type: nauc_ndcg_at_5_std
value: 12.089500000000001
- type: nauc_ndcg_at_5_diff1
value: 30.098200000000002
- type: nauc_ndcg_at_10_max
value: 23.9177
- type: nauc_ndcg_at_10_std
value: 12.1279
- type: nauc_ndcg_at_10_diff1
value: 26.3951
- type: nauc_ndcg_at_20_max
value: 22.2086
- type: nauc_ndcg_at_20_std
value: 11.355
- type: nauc_ndcg_at_20_diff1
value: 24.9668
- type: nauc_ndcg_at_100_max
value: 20.1961
- type: nauc_ndcg_at_100_std
value: 11.368300000000001
- type: nauc_ndcg_at_100_diff1
value: 21.654200000000003
- type: nauc_ndcg_at_1000_max
value: 19.7802
- type: nauc_ndcg_at_1000_std
value: 11.9399
- type: nauc_ndcg_at_1000_diff1
value: 19.8429
- type: nauc_map_at_1_max
value: 32.8089
- type: nauc_map_at_1_std
value: 13.0518
- type: nauc_map_at_1_diff1
value: 44.3602
- type: nauc_map_at_3_max
value: 29.285600000000002
- type: nauc_map_at_3_std
value: 12.4277
- type: nauc_map_at_3_diff1
value: 35.2678
- type: nauc_map_at_5_max
value: 27.6754
- type: nauc_map_at_5_std
value: 12.4042
- type: nauc_map_at_5_diff1
value: 33.330799999999996
- type: nauc_map_at_10_max
value: 26.571299999999997
- type: nauc_map_at_10_std
value: 12.439400000000001
- type: nauc_map_at_10_diff1
value: 31.275399999999998
- type: nauc_map_at_20_max
value: 25.8795
- type: nauc_map_at_20_std
value: 12.1596
- type: nauc_map_at_20_diff1
value: 30.6354
- type: nauc_map_at_100_max
value: 25.3369
- type: nauc_map_at_100_std
value: 12.0245
- type: nauc_map_at_100_diff1
value: 29.8703
- type: nauc_map_at_1000_max
value: 25.239800000000002
- type: nauc_map_at_1000_std
value: 12.0242
- type: nauc_map_at_1000_diff1
value: 29.7235
- type: nauc_recall_at_1_max
value: 32.8089
- type: nauc_recall_at_1_std
value: 13.0518
- type: nauc_recall_at_1_diff1
value: 44.3602
- type: nauc_recall_at_3_max
value: 26.747700000000002
- type: nauc_recall_at_3_std
value: 11.4203
- type: nauc_recall_at_3_diff1
value: 27.9047
- type: nauc_recall_at_5_max
value: 22.3707
- type: nauc_recall_at_5_std
value: 11.4164
- type: nauc_recall_at_5_diff1
value: 23.4182
- type: nauc_recall_at_10_max
value: 19.2758
- type: nauc_recall_at_10_std
value: 11.578800000000001
- type: nauc_recall_at_10_diff1
value: 18.030099999999997
- type: nauc_recall_at_20_max
value: 16.1643
- type: nauc_recall_at_20_std
value: 9.9037
- type: nauc_recall_at_20_diff1
value: 16.0833
- type: nauc_recall_at_100_max
value: 13.644700000000002
- type: nauc_recall_at_100_std
value: 10.986799999999999
- type: nauc_recall_at_100_diff1
value: 11.0515
- type: nauc_recall_at_1000_max
value: 13.9712
- type: nauc_recall_at_1000_std
value: 13.4048
- type: nauc_recall_at_1000_diff1
value: 6.569500000000001
- type: nauc_precision_at_1_max
value: 32.8089
- type: nauc_precision_at_1_std
value: 13.0518
- type: nauc_precision_at_1_diff1
value: 44.3602
- type: nauc_precision_at_3_max
value: 26.747700000000002
- type: nauc_precision_at_3_std
value: 11.4203
- type: nauc_precision_at_3_diff1
value: 27.9047
- type: nauc_precision_at_5_max
value: 22.3707
- type: nauc_precision_at_5_std
value: 11.4164
- type: nauc_precision_at_5_diff1
value: 23.4182
- type: nauc_precision_at_10_max
value: 19.2758
- type: nauc_precision_at_10_std
value: 11.578800000000001
- type: nauc_precision_at_10_diff1
value: 18.030099999999997
- type: nauc_precision_at_20_max
value: 16.1643
- type: nauc_precision_at_20_std
value: 9.9037
- type: nauc_precision_at_20_diff1
value: 16.0833
- type: nauc_precision_at_100_max
value: 13.644700000000002
- type: nauc_precision_at_100_std
value: 10.986799999999999
- type: nauc_precision_at_100_diff1
value: 11.0515
- type: nauc_precision_at_1000_max
value: 13.9712
- type: nauc_precision_at_1000_std
value: 13.4048
- type: nauc_precision_at_1000_diff1
value: 6.569500000000001
- type: nauc_mrr_at_1_max
value: 32.8089
- type: nauc_mrr_at_1_std
value: 13.0518
- type: nauc_mrr_at_1_diff1
value: 44.3602
- type: nauc_mrr_at_3_max
value: 29.285600000000002
- type: nauc_mrr_at_3_std
value: 12.4277
- type: nauc_mrr_at_3_diff1
value: 35.2678
- type: nauc_mrr_at_5_max
value: 27.6754
- type: nauc_mrr_at_5_std
value: 12.4042
- type: nauc_mrr_at_5_diff1
value: 33.330799999999996
- type: nauc_mrr_at_10_max
value: 26.571299999999997
- type: nauc_mrr_at_10_std
value: 12.439400000000001
- type: nauc_mrr_at_10_diff1
value: 31.275399999999998
- type: nauc_mrr_at_20_max
value: 25.8795
- type: nauc_mrr_at_20_std
value: 12.1596
- type: nauc_mrr_at_20_diff1
value: 30.6354
- type: nauc_mrr_at_100_max
value: 25.337
- type: nauc_mrr_at_100_std
value: 12.0245
- type: nauc_mrr_at_100_diff1
value: 29.870400000000004
- type: nauc_mrr_at_1000_max
value: 25.2399
- type: nauc_mrr_at_1000_std
value: 12.0242
- type: nauc_mrr_at_1000_diff1
value: 29.7236
- type: main_score
value: 6.203
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: ndcg_at_1
value: 31.791999999999998
- type: ndcg_at_3
value: 46.453
- type: ndcg_at_5
value: 51.623
- type: ndcg_at_10
value: 56.355999999999995
- type: ndcg_at_20
value: 58.757000000000005
- type: ndcg_at_100
value: 59.789
- type: ndcg_at_1000
value: 59.857000000000006
- type: map_at_1
value: 31.791999999999998
- type: map_at_3
value: 42.757
- type: map_at_5
value: 45.634
- type: map_at_10
value: 47.599000000000004
- type: map_at_20
value: 48.271
- type: map_at_100
value: 48.425000000000004
- type: map_at_1000
value: 48.427
- type: recall_at_1
value: 31.791999999999998
- type: recall_at_3
value: 57.18299999999999
- type: recall_at_5
value: 69.70100000000001
- type: recall_at_10
value: 84.282
- type: recall_at_20
value: 93.67
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: precision_at_1
value: 31.791999999999998
- type: precision_at_3
value: 19.061
- type: precision_at_5
value: 13.94
- type: precision_at_10
value: 8.427999999999999
- type: precision_at_20
value: 4.683
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 32.3613
- type: mrr_at_3
value: 42.935
- type: mrr_at_5
value: 45.844
- type: mrr_at_10
value: 47.808099999999996
- type: mrr_at_20
value: 48.4844
- type: mrr_at_100
value: 48.6345
- type: mrr_at_1000
value: 48.6364
- type: nauc_ndcg_at_1_max
value: -8.274099999999999
- type: nauc_ndcg_at_1_std
value: -8.1976
- type: nauc_ndcg_at_1_diff1
value: 14.155100000000001
- type: nauc_ndcg_at_3_max
value: -4.6223
- type: nauc_ndcg_at_3_std
value: -10.198500000000001
- type: nauc_ndcg_at_3_diff1
value: 14.516499999999999
- type: nauc_ndcg_at_5_max
value: -4.9834000000000005
- type: nauc_ndcg_at_5_std
value: -9.6634
- type: nauc_ndcg_at_5_diff1
value: 12.9298
- type: nauc_ndcg_at_10_max
value: -4.3251
- type: nauc_ndcg_at_10_std
value: -8.3068
- type: nauc_ndcg_at_10_diff1
value: 12.2939
- type: nauc_ndcg_at_20_max
value: -3.8912000000000004
- type: nauc_ndcg_at_20_std
value: -8.1821
- type: nauc_ndcg_at_20_diff1
value: 12.673599999999999
- type: nauc_ndcg_at_100_max
value: -5.0274
- type: nauc_ndcg_at_100_std
value: -8.450000000000001
- type: nauc_ndcg_at_100_diff1
value: 12.787399999999998
- type: nauc_ndcg_at_1000_max
value: -5.1416
- type: nauc_ndcg_at_1000_std
value: -8.6044
- type: nauc_ndcg_at_1000_diff1
value: 12.858600000000001
- type: nauc_map_at_1_max
value: -8.274099999999999
- type: nauc_map_at_1_std
value: -8.1976
- type: nauc_map_at_1_diff1
value: 14.155100000000001
- type: nauc_map_at_3_max
value: -5.6403
- type: nauc_map_at_3_std
value: -9.7092
- type: nauc_map_at_3_diff1
value: 14.0705
- type: nauc_map_at_5_max
value: -5.8896999999999995
- type: nauc_map_at_5_std
value: -9.3946
- type: nauc_map_at_5_diff1
value: 13.208
- type: nauc_map_at_10_max
value: -5.7523
- type: nauc_map_at_10_std
value: -8.9262
- type: nauc_map_at_10_diff1
value: 12.961500000000001
- type: nauc_map_at_20_max
value: -5.7103
- type: nauc_map_at_20_std
value: -8.9336
- type: nauc_map_at_20_diff1
value: 13.0351
- type: nauc_map_at_100_max
value: -5.8204
- type: nauc_map_at_100_std
value: -8.9441
- type: nauc_map_at_100_diff1
value: 13.0722
- type: nauc_map_at_1000_max
value: -5.8239
- type: nauc_map_at_1000_std
value: -8.9463
- type: nauc_map_at_1000_diff1
value: 13.0724
- type: nauc_recall_at_1_max
value: -8.274099999999999
- type: nauc_recall_at_1_std
value: -8.1976
- type: nauc_recall_at_1_diff1
value: 14.155100000000001
- type: nauc_recall_at_3_max
value: -1.4792
- type: nauc_recall_at_3_std
value: -11.6828
- type: nauc_recall_at_3_diff1
value: 16.026
- type: nauc_recall_at_5_max
value: -1.6868999999999998
- type: nauc_recall_at_5_std
value: -10.5497
- type: nauc_recall_at_5_diff1
value: 11.826
- type: nauc_recall_at_10_max
value: 5.1425
- type: nauc_recall_at_10_std
value: -3.1008999999999998
- type: nauc_recall_at_10_diff1
value: 7.6911
- type: nauc_recall_at_20_max
value: 25.921499999999998
- type: nauc_recall_at_20_std
value: 6.812600000000001
- type: nauc_recall_at_20_diff1
value: 8.311300000000001
- type: nauc_recall_at_100_max
value: 28.425299999999996
- type: nauc_recall_at_100_std
value: 45.9592
- type: nauc_recall_at_100_diff1
value: -11.801
- type: nauc_recall_at_1000_max
value: 21.834500000000002
- type: nauc_recall_at_1000_std
value: 38.804
- type: nauc_recall_at_1000_diff1
value: -3.5484
- type: nauc_precision_at_1_max
value: -8.274099999999999
- type: nauc_precision_at_1_std
value: -8.1976
- type: nauc_precision_at_1_diff1
value: 14.155100000000001
- type: nauc_precision_at_3_max
value: -1.4792
- type: nauc_precision_at_3_std
value: -11.6828
- type: nauc_precision_at_3_diff1
value: 16.026
- type: nauc_precision_at_5_max
value: -1.6868999999999998
- type: nauc_precision_at_5_std
value: -10.5497
- type: nauc_precision_at_5_diff1
value: 11.826
- type: nauc_precision_at_10_max
value: 5.1425
- type: nauc_precision_at_10_std
value: -3.1008999999999998
- type: nauc_precision_at_10_diff1
value: 7.6911
- type: nauc_precision_at_20_max
value: 25.921499999999998
- type: nauc_precision_at_20_std
value: 6.812600000000001
- type: nauc_precision_at_20_diff1
value: 8.311300000000001
- type: nauc_precision_at_100_max
value: 28.425299999999996
- type: nauc_precision_at_100_std
value: 45.9592
- type: nauc_precision_at_100_diff1
value: -11.801
- type: nauc_precision_at_1000_max
value: 21.834500000000002
- type: nauc_precision_at_1000_std
value: 38.804
- type: nauc_precision_at_1000_diff1
value: -3.5484
- type: nauc_mrr_at_1_max
value: -8.6929
- type: nauc_mrr_at_1_std
value: -7.7584
- type: nauc_mrr_at_1_diff1
value: 12.488100000000001
- type: nauc_mrr_at_3_max
value: -6.6954
- type: nauc_mrr_at_3_std
value: -9.7075
- type: nauc_mrr_at_3_diff1
value: 12.2994
- type: nauc_mrr_at_5_max
value: -6.7945
- type: nauc_mrr_at_5_std
value: -9.3751
- type: nauc_mrr_at_5_diff1
value: 11.544699999999999
- type: nauc_mrr_at_10_max
value: -6.6614
- type: nauc_mrr_at_10_std
value: -8.859200000000001
- type: nauc_mrr_at_10_diff1
value: 11.2614
- type: nauc_mrr_at_20_max
value: -6.6408
- type: nauc_mrr_at_20_std
value: -8.8599
- type: nauc_mrr_at_20_diff1
value: 11.3125
- type: nauc_mrr_at_100_max
value: -6.7582
- type: nauc_mrr_at_100_std
value: -8.876299999999999
- type: nauc_mrr_at_100_diff1
value: 11.325000000000001
- type: nauc_mrr_at_1000_max
value: -6.7619
- type: nauc_mrr_at_1000_std
value: -8.878400000000001
- type: nauc_mrr_at_1000_diff1
value: 11.3251
- type: main_score
value: 56.355999999999995
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.813
- type: v_measure_std
value: 13.830899999999998
- type: main_score
value: 46.813
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 41.9895
- type: v_measure_std
value: 14.3004
- type: main_score
value: 41.9895
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.1329
- type: mrr
value: 76.8303
- type: nAUC_map_max
value: 23.5323
- type: nAUC_map_std
value: 14.7567
- type: nAUC_map_diff1
value: 11.6783
- type: nAUC_mrr_max
value: 32.3309
- type: nAUC_mrr_std
value: 19.1617
- type: nAUC_mrr_diff1
value: 23.508699999999997
- type: main_score
value: 64.1329
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: pearson
value: 90.2058
- type: spearman
value: 88.1641
- type: cosine_pearson
value: 90.2058
- type: cosine_spearman
value: 88.1641
- type: manhattan_pearson
value: 87.7579
- type: manhattan_spearman
value: 87.6249
- type: euclidean_pearson
value: 88.3667
- type: euclidean_spearman
value: 88.1641
- type: main_score
value: 88.1641
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 77.3247
- type: f1
value: 76.3532
- type: f1_weighted
value: 76.3532
- type: main_score
value: 77.3247
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.018
- type: v_measure_std
value: 0.7512
- type: main_score
value: 39.018
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.8097
- type: v_measure_std
value: 0.9368
- type: main_score
value: 36.8097
- task:
type: Retrieval
dataset:
name: MTEB COIRCodeSearchNetRetrieval (python)
type: CoIR-Retrieval/CodeSearchNet
config: python
split: test
revision: 4adc7bc41202b5c13543c9c886a25f340634dab3
metrics:
- type: ndcg_at_1
value: 85.353
- type: ndcg_at_3
value: 89.493
- type: ndcg_at_5
value: 90.347
- type: ndcg_at_10
value: 90.89699999999999
- type: ndcg_at_20
value: 91.20899999999999
- type: ndcg_at_100
value: 91.506
- type: ndcg_at_1000
value: 91.62400000000001
- type: map_at_1
value: 85.353
- type: map_at_3
value: 88.532
- type: map_at_5
value: 89.008
- type: map_at_10
value: 89.238
- type: map_at_20
value: 89.323
- type: map_at_100
value: 89.366
- type: map_at_1000
value: 89.371
- type: recall_at_1
value: 85.353
- type: recall_at_3
value: 92.251
- type: recall_at_5
value: 94.316
- type: recall_at_10
value: 95.998
- type: recall_at_20
value: 97.238
- type: recall_at_100
value: 98.81400000000001
- type: recall_at_1000
value: 99.725
- type: precision_at_1
value: 85.353
- type: precision_at_3
value: 30.75
- type: precision_at_5
value: 18.863
- type: precision_at_10
value: 9.6
- type: precision_at_20
value: 4.862
- type: precision_at_100
value: 0.988
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 85.3533
- type: mrr_at_3
value: 88.5318
- type: mrr_at_5
value: 89.0077
- type: mrr_at_10
value: 89.2381
- type: mrr_at_20
value: 89.3231
- type: mrr_at_100
value: 89.3659
- type: mrr_at_1000
value: 89.3707
- type: nauc_ndcg_at_1_max
value: 79.05529999999999
- type: nauc_ndcg_at_1_std
value: 6.6982
- type: nauc_ndcg_at_1_diff1
value: 89.6212
- type: nauc_ndcg_at_3_max
value: 82.5612
- type: nauc_ndcg_at_3_std
value: 10.379199999999999
- type: nauc_ndcg_at_3_diff1
value: 87.809
- type: nauc_ndcg_at_5_max
value: 82.4315
- type: nauc_ndcg_at_5_std
value: 10.5113
- type: nauc_ndcg_at_5_diff1
value: 88.0763
- type: nauc_ndcg_at_10_max
value: 82.4135
- type: nauc_ndcg_at_10_std
value: 11.046
- type: nauc_ndcg_at_10_diff1
value: 88.2008
- type: nauc_ndcg_at_20_max
value: 82.3276
- type: nauc_ndcg_at_20_std
value: 11.4306
- type: nauc_ndcg_at_20_diff1
value: 88.2525
- type: nauc_ndcg_at_100_max
value: 82.1023
- type: nauc_ndcg_at_100_std
value: 11.2119
- type: nauc_ndcg_at_100_diff1
value: 88.3149
- type: nauc_ndcg_at_1000_max
value: 81.91720000000001
- type: nauc_ndcg_at_1000_std
value: 10.7203
- type: nauc_ndcg_at_1000_diff1
value: 88.349
- type: nauc_map_at_1_max
value: 79.05529999999999
- type: nauc_map_at_1_std
value: 6.6982
- type: nauc_map_at_1_diff1
value: 89.6212
- type: nauc_map_at_3_max
value: 81.5856
- type: nauc_map_at_3_std
value: 9.3626
- type: nauc_map_at_3_diff1
value: 88.2364
- type: nauc_map_at_5_max
value: 81.4778
- type: nauc_map_at_5_std
value: 9.3662
- type: nauc_map_at_5_diff1
value: 88.3865
- type: nauc_map_at_10_max
value: 81.447
- type: nauc_map_at_10_std
value: 9.5111
- type: nauc_map_at_10_diff1
value: 88.43469999999999
- type: nauc_map_at_20_max
value: 81.4196
- type: nauc_map_at_20_std
value: 9.593
- type: nauc_map_at_20_diff1
value: 88.4473
- type: nauc_map_at_100_max
value: 81.3925
- type: nauc_map_at_100_std
value: 9.5683
- type: nauc_map_at_100_diff1
value: 88.4559
- type: nauc_map_at_1000_max
value: 81.3865
- type: nauc_map_at_1000_std
value: 9.554
- type: nauc_map_at_1000_diff1
value: 88.457
- type: nauc_recall_at_1_max
value: 79.05529999999999
- type: nauc_recall_at_1_std
value: 6.6982
- type: nauc_recall_at_1_diff1
value: 89.6212
- type: nauc_recall_at_3_max
value: 86.56580000000001
- type: nauc_recall_at_3_std
value: 14.5464
- type: nauc_recall_at_3_diff1
value: 86.1047
- type: nauc_recall_at_5_max
value: 87.5044
- type: nauc_recall_at_5_std
value: 16.7155
- type: nauc_recall_at_5_diff1
value: 86.5603
- type: nauc_recall_at_10_max
value: 89.5625
- type: nauc_recall_at_10_std
value: 23.230700000000002
- type: nauc_recall_at_10_diff1
value: 86.8079
- type: nauc_recall_at_20_max
value: 91.7174
- type: nauc_recall_at_20_std
value: 33.203700000000005
- type: nauc_recall_at_20_diff1
value: 86.8468
- type: nauc_recall_at_100_max
value: 95.55160000000001
- type: nauc_recall_at_100_std
value: 53.0169
- type: nauc_recall_at_100_diff1
value: 87.1867
- type: nauc_recall_at_1000_max
value: 97.0907
- type: nauc_recall_at_1000_std
value: 75.0177
- type: nauc_recall_at_1000_diff1
value: 91.3005
- type: nauc_precision_at_1_max
value: 79.05529999999999
- type: nauc_precision_at_1_std
value: 6.6982
- type: nauc_precision_at_1_diff1
value: 89.6212
- type: nauc_precision_at_3_max
value: 86.56580000000001
- type: nauc_precision_at_3_std
value: 14.5464
- type: nauc_precision_at_3_diff1
value: 86.1047
- type: nauc_precision_at_5_max
value: 87.5044
- type: nauc_precision_at_5_std
value: 16.7155
- type: nauc_precision_at_5_diff1
value: 86.5603
- type: nauc_precision_at_10_max
value: 89.5625
- type: nauc_precision_at_10_std
value: 23.230700000000002
- type: nauc_precision_at_10_diff1
value: 86.8079
- type: nauc_precision_at_20_max
value: 91.7174
- type: nauc_precision_at_20_std
value: 33.203700000000005
- type: nauc_precision_at_20_diff1
value: 86.8468
- type: nauc_precision_at_100_max
value: 95.55160000000001
- type: nauc_precision_at_100_std
value: 53.0169
- type: nauc_precision_at_100_diff1
value: 87.1867
- type: nauc_precision_at_1000_max
value: 97.0907
- type: nauc_precision_at_1000_std
value: 75.0177
- type: nauc_precision_at_1000_diff1
value: 91.3005
- type: nauc_mrr_at_1_max
value: 79.05529999999999
- type: nauc_mrr_at_1_std
value: 6.6982
- type: nauc_mrr_at_1_diff1
value: 89.6212
- type: nauc_mrr_at_3_max
value: 81.5856
- type: nauc_mrr_at_3_std
value: 9.3626
- type: nauc_mrr_at_3_diff1
value: 88.2364
- type: nauc_mrr_at_5_max
value: 81.4778
- type: nauc_mrr_at_5_std
value: 9.3662
- type: nauc_mrr_at_5_diff1
value: 88.3865
- type: nauc_mrr_at_10_max
value: 81.447
- type: nauc_mrr_at_10_std
value: 9.5111
- type: nauc_mrr_at_10_diff1
value: 88.43469999999999
- type: nauc_mrr_at_20_max
value: 81.4196
- type: nauc_mrr_at_20_std
value: 9.593
- type: nauc_mrr_at_20_diff1
value: 88.4473
- type: nauc_mrr_at_100_max
value: 81.3925
- type: nauc_mrr_at_100_std
value: 9.5683
- type: nauc_mrr_at_100_diff1
value: 88.4559
- type: nauc_mrr_at_1000_max
value: 81.3865
- type: nauc_mrr_at_1000_std
value: 9.554
- type: nauc_mrr_at_1000_diff1
value: 88.457
- type: main_score
value: 90.89699999999999
- task:
type: Retrieval
dataset:
name: MTEB COIRCodeSearchNetRetrieval (javascript)
type: CoIR-Retrieval/CodeSearchNet
config: javascript
split: test
revision: 4adc7bc41202b5c13543c9c886a25f340634dab3
metrics:
- type: ndcg_at_1
value: 35.46
- type: ndcg_at_3
value: 42.799
- type: ndcg_at_5
value: 44.64
- type: ndcg_at_10
value: 46.54
- type: ndcg_at_20
value: 48.025
- type: ndcg_at_100
value: 50.307
- type: ndcg_at_1000
value: 51.925
- type: map_at_1
value: 35.46
- type: map_at_3
value: 41.016000000000005
- type: map_at_5
value: 42.038
- type: map_at_10
value: 42.825
- type: map_at_20
value: 43.233
- type: map_at_100
value: 43.541999999999994
- type: map_at_1000
value: 43.599
- type: recall_at_1
value: 35.46
- type: recall_at_3
value: 47.949000000000005
- type: recall_at_5
value: 52.416
- type: recall_at_10
value: 58.28
- type: recall_at_20
value: 64.145
- type: recall_at_100
value: 76.542
- type: recall_at_1000
value: 89.547
- type: precision_at_1
value: 35.46
- type: precision_at_3
value: 15.983
- type: precision_at_5
value: 10.483
- type: precision_at_10
value: 5.827999999999999
- type: precision_at_20
value: 3.2070000000000003
- type: precision_at_100
value: 0.765
- type: precision_at_1000
value: 0.09
- type: mrr_at_1
value: 35.460300000000004
- type: mrr_at_3
value: 41.0159
- type: mrr_at_5
value: 42.038399999999996
- type: mrr_at_10
value: 42.8251
- type: mrr_at_20
value: 43.2333
- type: mrr_at_100
value: 43.542199999999994
- type: mrr_at_1000
value: 43.5986
- type: nauc_ndcg_at_1_max
value: 48.2915
- type: nauc_ndcg_at_1_std
value: 2.4132000000000002
- type: nauc_ndcg_at_1_diff1
value: 64.10810000000001
- type: nauc_ndcg_at_3_max
value: 51.357
- type: nauc_ndcg_at_3_std
value: 4.9681999999999995
- type: nauc_ndcg_at_3_diff1
value: 58.012600000000006
- type: nauc_ndcg_at_5_max
value: 51.8888
- type: nauc_ndcg_at_5_std
value: 6.2654000000000005
- type: nauc_ndcg_at_5_diff1
value: 57.103
- type: nauc_ndcg_at_10_max
value: 51.9571
- type: nauc_ndcg_at_10_std
value: 7.446
- type: nauc_ndcg_at_10_diff1
value: 56.505700000000004
- type: nauc_ndcg_at_20_max
value: 51.638799999999996
- type: nauc_ndcg_at_20_std
value: 7.7742
- type: nauc_ndcg_at_20_diff1
value: 55.9805
- type: nauc_ndcg_at_100_max
value: 51.3786
- type: nauc_ndcg_at_100_std
value: 8.1191
- type: nauc_ndcg_at_100_diff1
value: 56.3265
- type: nauc_ndcg_at_1000_max
value: 51.162
- type: nauc_ndcg_at_1000_std
value: 7.6863
- type: nauc_ndcg_at_1000_diff1
value: 56.6531
- type: nauc_map_at_1_max
value: 48.2915
- type: nauc_map_at_1_std
value: 2.4132000000000002
- type: nauc_map_at_1_diff1
value: 64.10810000000001
- type: nauc_map_at_3_max
value: 50.6599
- type: nauc_map_at_3_std
value: 4.3285
- type: nauc_map_at_3_diff1
value: 59.453100000000006
- type: nauc_map_at_5_max
value: 50.9502
- type: nauc_map_at_5_std
value: 5.0428
- type: nauc_map_at_5_diff1
value: 58.9452
- type: nauc_map_at_10_max
value: 50.9749
- type: nauc_map_at_10_std
value: 5.5069
- type: nauc_map_at_10_diff1
value: 58.7167
- type: nauc_map_at_20_max
value: 50.8815
- type: nauc_map_at_20_std
value: 5.5846
- type: nauc_map_at_20_diff1
value: 58.5793
- type: nauc_map_at_100_max
value: 50.8454
- type: nauc_map_at_100_std
value: 5.6249
- type: nauc_map_at_100_diff1
value: 58.6352
- type: nauc_map_at_1000_max
value: 50.8377
- type: nauc_map_at_1000_std
value: 5.6119
- type: nauc_map_at_1000_diff1
value: 58.6477
- type: nauc_recall_at_1_max
value: 48.2915
- type: nauc_recall_at_1_std
value: 2.4132000000000002
- type: nauc_recall_at_1_diff1
value: 64.10810000000001
- type: nauc_recall_at_3_max
value: 53.3613
- type: nauc_recall_at_3_std
value: 6.833699999999999
- type: nauc_recall_at_3_diff1
value: 53.8466
- type: nauc_recall_at_5_max
value: 54.7395
- type: nauc_recall_at_5_std
value: 10.1014
- type: nauc_recall_at_5_diff1
value: 51.520900000000005
- type: nauc_recall_at_10_max
value: 55.125299999999996
- type: nauc_recall_at_10_std
value: 14.277899999999999
- type: nauc_recall_at_10_diff1
value: 49.1874
- type: nauc_recall_at_20_max
value: 54.0194
- type: nauc_recall_at_20_std
value: 16.4329
- type: nauc_recall_at_20_diff1
value: 46.1551
- type: nauc_recall_at_100_max
value: 52.7898
- type: nauc_recall_at_100_std
value: 22.375600000000002
- type: nauc_recall_at_100_diff1
value: 45.351
- type: nauc_recall_at_1000_max
value: 49.0379
- type: nauc_recall_at_1000_std
value: 26.0579
- type: nauc_recall_at_1000_diff1
value: 41.7849
- type: nauc_precision_at_1_max
value: 48.2915
- type: nauc_precision_at_1_std
value: 2.4132000000000002
- type: nauc_precision_at_1_diff1
value: 64.10810000000001
- type: nauc_precision_at_3_max
value: 53.3613
- type: nauc_precision_at_3_std
value: 6.833699999999999
- type: nauc_precision_at_3_diff1
value: 53.8466
- type: nauc_precision_at_5_max
value: 54.7395
- type: nauc_precision_at_5_std
value: 10.1014
- type: nauc_precision_at_5_diff1
value: 51.520900000000005
- type: nauc_precision_at_10_max
value: 55.125299999999996
- type: nauc_precision_at_10_std
value: 14.277899999999999
- type: nauc_precision_at_10_diff1
value: 49.1874
- type: nauc_precision_at_20_max
value: 54.0194
- type: nauc_precision_at_20_std
value: 16.4329
- type: nauc_precision_at_20_diff1
value: 46.1551
- type: nauc_precision_at_100_max
value: 52.7898
- type: nauc_precision_at_100_std
value: 22.375600000000002
- type: nauc_precision_at_100_diff1
value: 45.351
- type: nauc_precision_at_1000_max
value: 49.0379
- type: nauc_precision_at_1000_std
value: 26.0579
- type: nauc_precision_at_1000_diff1
value: 41.7849
- type: nauc_mrr_at_1_max
value: 48.2915
- type: nauc_mrr_at_1_std
value: 2.4132000000000002
- type: nauc_mrr_at_1_diff1
value: 64.10810000000001
- type: nauc_mrr_at_3_max
value: 50.6599
- type: nauc_mrr_at_3_std
value: 4.3285
- type: nauc_mrr_at_3_diff1
value: 59.453100000000006
- type: nauc_mrr_at_5_max
value: 50.9502
- type: nauc_mrr_at_5_std
value: 5.0428
- type: nauc_mrr_at_5_diff1
value: 58.9452
- type: nauc_mrr_at_10_max
value: 50.9749
- type: nauc_mrr_at_10_std
value: 5.5069
- type: nauc_mrr_at_10_diff1
value: 58.7167
- type: nauc_mrr_at_20_max
value: 50.8815
- type: nauc_mrr_at_20_std
value: 5.5846
- type: nauc_mrr_at_20_diff1
value: 58.5793
- type: nauc_mrr_at_100_max
value: 50.8454
- type: nauc_mrr_at_100_std
value: 5.6249
- type: nauc_mrr_at_100_diff1
value: 58.6352
- type: nauc_mrr_at_1000_max
value: 50.8377
- type: nauc_mrr_at_1000_std
value: 5.6119
- type: nauc_mrr_at_1000_diff1
value: 58.6477
- type: main_score
value: 46.54
- task:
type: Retrieval
dataset:
name: MTEB COIRCodeSearchNetRetrieval (go)
type: CoIR-Retrieval/CodeSearchNet
config: go
split: test
revision: 4adc7bc41202b5c13543c9c886a25f340634dab3
metrics:
- type: ndcg_at_1
value: 45.728
- type: ndcg_at_3
value: 54.942
- type: ndcg_at_5
value: 57.19499999999999
- type: ndcg_at_10
value: 59.471
- type: ndcg_at_20
value: 60.888
- type: ndcg_at_100
value: 62.67700000000001
- type: ndcg_at_1000
value: 63.654999999999994
- type: map_at_1
value: 45.728
- type: map_at_3
value: 52.717000000000006
- type: map_at_5
value: 53.968
- type: map_at_10
value: 54.921
- type: map_at_20
value: 55.31
- type: map_at_100
value: 55.555
- type: map_at_1000
value: 55.589999999999996
- type: recall_at_1
value: 45.728
- type: recall_at_3
value: 61.364
- type: recall_at_5
value: 66.83099999999999
- type: recall_at_10
value: 73.8
- type: recall_at_20
value: 79.402
- type: recall_at_100
value: 89.079
- type: recall_at_1000
value: 96.885
- type: precision_at_1
value: 45.728
- type: precision_at_3
value: 20.455000000000002
- type: precision_at_5
value: 13.366
- type: precision_at_10
value: 7.380000000000001
- type: precision_at_20
value: 3.9699999999999998
- type: precision_at_100
value: 0.8909999999999999
- type: precision_at_1000
value: 0.097
- type: mrr_at_1
value: 45.7277
- type: mrr_at_3
value: 52.7169
- type: mrr_at_5
value: 53.9678
- type: mrr_at_10
value: 54.920500000000004
- type: mrr_at_20
value: 55.3099
- type: mrr_at_100
value: 55.5546
- type: mrr_at_1000
value: 55.5896
- type: nauc_ndcg_at_1_max
value: 40.5391
- type: nauc_ndcg_at_1_std
value: -2.9052000000000002
- type: nauc_ndcg_at_1_diff1
value: 63.2351
- type: nauc_ndcg_at_3_max
value: 43.8365
- type: nauc_ndcg_at_3_std
value: -0.6831
- type: nauc_ndcg_at_3_diff1
value: 57.782599999999995
- type: nauc_ndcg_at_5_max
value: 43.851600000000005
- type: nauc_ndcg_at_5_std
value: -0.3032
- type: nauc_ndcg_at_5_diff1
value: 57.0763
- type: nauc_ndcg_at_10_max
value: 44.1492
- type: nauc_ndcg_at_10_std
value: 0.6748
- type: nauc_ndcg_at_10_diff1
value: 56.8967
- type: nauc_ndcg_at_20_max
value: 44.1367
- type: nauc_ndcg_at_20_std
value: 0.8896
- type: nauc_ndcg_at_20_diff1
value: 56.97560000000001
- type: nauc_ndcg_at_100_max
value: 43.9934
- type: nauc_ndcg_at_100_std
value: 1.0534
- type: nauc_ndcg_at_100_diff1
value: 57.347899999999996
- type: nauc_ndcg_at_1000_max
value: 43.8679
- type: nauc_ndcg_at_1000_std
value: 0.6431
- type: nauc_ndcg_at_1000_diff1
value: 57.6967
- type: nauc_map_at_1_max
value: 40.5391
- type: nauc_map_at_1_std
value: -2.9052000000000002
- type: nauc_map_at_1_diff1
value: 63.2351
- type: nauc_map_at_3_max
value: 43.0286
- type: nauc_map_at_3_std
value: -1.2933
- type: nauc_map_at_3_diff1
value: 59.065
- type: nauc_map_at_5_max
value: 43.0224
- type: nauc_map_at_5_std
value: -1.1081
- type: nauc_map_at_5_diff1
value: 58.7146
- type: nauc_map_at_10_max
value: 43.127500000000005
- type: nauc_map_at_10_std
value: -0.7247
- type: nauc_map_at_10_diff1
value: 58.6619
- type: nauc_map_at_20_max
value: 43.1213
- type: nauc_map_at_20_std
value: -0.6853
- type: nauc_map_at_20_diff1
value: 58.704299999999996
- type: nauc_map_at_100_max
value: 43.0908
- type: nauc_map_at_100_std
value: -0.6792
- type: nauc_map_at_100_diff1
value: 58.7592
- type: nauc_map_at_1000_max
value: 43.085499999999996
- type: nauc_map_at_1000_std
value: -0.6897
- type: nauc_map_at_1000_diff1
value: 58.7689
- type: nauc_recall_at_1_max
value: 40.5391
- type: nauc_recall_at_1_std
value: -2.9052000000000002
- type: nauc_recall_at_1_diff1
value: 63.2351
- type: nauc_recall_at_3_max
value: 46.3617
- type: nauc_recall_at_3_std
value: 1.2550999999999999
- type: nauc_recall_at_3_diff1
value: 53.7993
- type: nauc_recall_at_5_max
value: 46.6666
- type: nauc_recall_at_5_std
value: 2.5401
- type: nauc_recall_at_5_diff1
value: 51.413799999999995
- type: nauc_recall_at_10_max
value: 48.3645
- type: nauc_recall_at_10_std
value: 6.8622000000000005
- type: nauc_recall_at_10_diff1
value: 49.6971
- type: nauc_recall_at_20_max
value: 49.1074
- type: nauc_recall_at_20_std
value: 9.4846
- type: nauc_recall_at_20_diff1
value: 48.5587
- type: nauc_recall_at_100_max
value: 51.2638
- type: nauc_recall_at_100_std
value: 18.4911
- type: nauc_recall_at_100_diff1
value: 47.2445
- type: nauc_recall_at_1000_max
value: 61.0283
- type: nauc_recall_at_1000_std
value: 31.5949
- type: nauc_recall_at_1000_diff1
value: 47.239599999999996
- type: nauc_precision_at_1_max
value: 40.5391
- type: nauc_precision_at_1_std
value: -2.9052000000000002
- type: nauc_precision_at_1_diff1
value: 63.2351
- type: nauc_precision_at_3_max
value: 46.3617
- type: nauc_precision_at_3_std
value: 1.2550999999999999
- type: nauc_precision_at_3_diff1
value: 53.7993
- type: nauc_precision_at_5_max
value: 46.6666
- type: nauc_precision_at_5_std
value: 2.5401
- type: nauc_precision_at_5_diff1
value: 51.413799999999995
- type: nauc_precision_at_10_max
value: 48.3645
- type: nauc_precision_at_10_std
value: 6.8622000000000005
- type: nauc_precision_at_10_diff1
value: 49.6971
- type: nauc_precision_at_20_max
value: 49.1074
- type: nauc_precision_at_20_std
value: 9.4846
- type: nauc_precision_at_20_diff1
value: 48.5587
- type: nauc_precision_at_100_max
value: 51.2638
- type: nauc_precision_at_100_std
value: 18.4911
- type: nauc_precision_at_100_diff1
value: 47.2445
- type: nauc_precision_at_1000_max
value: 61.0283
- type: nauc_precision_at_1000_std
value: 31.5949
- type: nauc_precision_at_1000_diff1
value: 47.239599999999996
- type: nauc_mrr_at_1_max
value: 40.5391
- type: nauc_mrr_at_1_std
value: -2.9052000000000002
- type: nauc_mrr_at_1_diff1
value: 63.2351
- type: nauc_mrr_at_3_max
value: 43.0286
- type: nauc_mrr_at_3_std
value: -1.2933
- type: nauc_mrr_at_3_diff1
value: 59.065
- type: nauc_mrr_at_5_max
value: 43.0224
- type: nauc_mrr_at_5_std
value: -1.1081
- type: nauc_mrr_at_5_diff1
value: 58.7146
- type: nauc_mrr_at_10_max
value: 43.127500000000005
- type: nauc_mrr_at_10_std
value: -0.7247
- type: nauc_mrr_at_10_diff1
value: 58.6619
- type: nauc_mrr_at_20_max
value: 43.1213
- type: nauc_mrr_at_20_std
value: -0.6853
- type: nauc_mrr_at_20_diff1
value: 58.704299999999996
- type: nauc_mrr_at_100_max
value: 43.0908
- type: nauc_mrr_at_100_std
value: -0.6792
- type: nauc_mrr_at_100_diff1
value: 58.7592
- type: nauc_mrr_at_1000_max
value: 43.085499999999996
- type: nauc_mrr_at_1000_std
value: -0.6897
- type: nauc_mrr_at_1000_diff1
value: 58.7689
- type: main_score
value: 59.471
- task:
type: Retrieval
dataset:
name: MTEB COIRCodeSearchNetRetrieval (ruby)
type: CoIR-Retrieval/CodeSearchNet
config: ruby
split: test
revision: 4adc7bc41202b5c13543c9c886a25f340634dab3
metrics:
- type: ndcg_at_1
value: 38.144
- type: ndcg_at_3
value: 46.086
- type: ndcg_at_5
value: 48.13
- type: ndcg_at_10
value: 50.166
- type: ndcg_at_20
value: 51.672
- type: ndcg_at_100
value: 53.81
- type: ndcg_at_1000
value: 55.401999999999994
- type: map_at_1
value: 38.144
- type: map_at_3
value: 44.118
- type: map_at_5
value: 45.245000000000005
- type: map_at_10
value: 46.061
- type: map_at_20
value: 46.475
- type: map_at_100
value: 46.761
- type: map_at_1000
value: 46.815
- type: recall_at_1
value: 38.144
- type: recall_at_3
value: 51.784
- type: recall_at_5
value: 56.779999999999994
- type: recall_at_10
value: 63.20400000000001
- type: recall_at_20
value: 69.151
- type: recall_at_100
value: 80.809
- type: recall_at_1000
value: 93.65599999999999
- type: precision_at_1
value: 38.144
- type: precision_at_3
value: 17.261000000000003
- type: precision_at_5
value: 11.356
- type: precision_at_10
value: 6.32
- type: precision_at_20
value: 3.458
- type: precision_at_100
value: 0.808
- type: precision_at_1000
value: 0.094
- type: mrr_at_1
value: 38.1443
- type: mrr_at_3
value: 44.1184
- type: mrr_at_5
value: 45.2445
- type: mrr_at_10
value: 46.0607
- type: mrr_at_20
value: 46.475
- type: mrr_at_100
value: 46.7611
- type: mrr_at_1000
value: 46.8146
- type: nauc_ndcg_at_1_max
value: 49.8526
- type: nauc_ndcg_at_1_std
value: 6.944500000000001
- type: nauc_ndcg_at_1_diff1
value: 59.0325
- type: nauc_ndcg_at_3_max
value: 48.8152
- type: nauc_ndcg_at_3_std
value: 6.2506
- type: nauc_ndcg_at_3_diff1
value: 51.7373
- type: nauc_ndcg_at_5_max
value: 48.4399
- type: nauc_ndcg_at_5_std
value: 6.687
- type: nauc_ndcg_at_5_diff1
value: 50.569900000000004
- type: nauc_ndcg_at_10_max
value: 47.2669
- type: nauc_ndcg_at_10_std
value: 6.703
- type: nauc_ndcg_at_10_diff1
value: 49.3867
- type: nauc_ndcg_at_20_max
value: 47.1761
- type: nauc_ndcg_at_20_std
value: 7.0552
- type: nauc_ndcg_at_20_diff1
value: 49.3528
- type: nauc_ndcg_at_100_max
value: 47.196
- type: nauc_ndcg_at_100_std
value: 7.697
- type: nauc_ndcg_at_100_diff1
value: 49.9359
- type: nauc_ndcg_at_1000_max
value: 47.4306
- type: nauc_ndcg_at_1000_std
value: 7.3536
- type: nauc_ndcg_at_1000_diff1
value: 50.365700000000004
- type: nauc_map_at_1_max
value: 49.8526
- type: nauc_map_at_1_std
value: 6.944500000000001
- type: nauc_map_at_1_diff1
value: 59.0325
- type: nauc_map_at_3_max
value: 48.932900000000004
- type: nauc_map_at_3_std
value: 6.285499999999999
- type: nauc_map_at_3_diff1
value: 53.4821
- type: nauc_map_at_5_max
value: 48.709799999999994
- type: nauc_map_at_5_std
value: 6.5305
- type: nauc_map_at_5_diff1
value: 52.8586
- type: nauc_map_at_10_max
value: 48.2504
- type: nauc_map_at_10_std
value: 6.535299999999999
- type: nauc_map_at_10_diff1
value: 52.410000000000004
- type: nauc_map_at_20_max
value: 48.2424
- type: nauc_map_at_20_std
value: 6.6425
- type: nauc_map_at_20_diff1
value: 52.4289
- type: nauc_map_at_100_max
value: 48.254999999999995
- type: nauc_map_at_100_std
value: 6.7272
- type: nauc_map_at_100_diff1
value: 52.517199999999995
- type: nauc_map_at_1000_max
value: 48.2618
- type: nauc_map_at_1000_std
value: 6.7179
- type: nauc_map_at_1000_diff1
value: 52.5296
- type: nauc_recall_at_1_max
value: 49.8526
- type: nauc_recall_at_1_std
value: 6.944500000000001
- type: nauc_recall_at_1_diff1
value: 59.0325
- type: nauc_recall_at_3_max
value: 48.5241
- type: nauc_recall_at_3_std
value: 6.2048
- type: nauc_recall_at_3_diff1
value: 46.5818
- type: nauc_recall_at_5_max
value: 47.6347
- type: nauc_recall_at_5_std
value: 7.290299999999999
- type: nauc_recall_at_5_diff1
value: 43.3392
- type: nauc_recall_at_10_max
value: 43.4268
- type: nauc_recall_at_10_std
value: 7.4028
- type: nauc_recall_at_10_diff1
value: 38.508700000000005
- type: nauc_recall_at_20_max
value: 42.416199999999996
- type: nauc_recall_at_20_std
value: 9.0454
- type: nauc_recall_at_20_diff1
value: 36.9086
- type: nauc_recall_at_100_max
value: 40.23
- type: nauc_recall_at_100_std
value: 15.776000000000002
- type: nauc_recall_at_100_diff1
value: 36.492599999999996
- type: nauc_recall_at_1000_max
value: 36.7611
- type: nauc_recall_at_1000_std
value: 16.9938
- type: nauc_recall_at_1000_diff1
value: 29.5398
- type: nauc_precision_at_1_max
value: 49.8526
- type: nauc_precision_at_1_std
value: 6.944500000000001
- type: nauc_precision_at_1_diff1
value: 59.0325
- type: nauc_precision_at_3_max
value: 48.5241
- type: nauc_precision_at_3_std
value: 6.2048
- type: nauc_precision_at_3_diff1
value: 46.5818
- type: nauc_precision_at_5_max
value: 47.6347
- type: nauc_precision_at_5_std
value: 7.290299999999999
- type: nauc_precision_at_5_diff1
value: 43.3392
- type: nauc_precision_at_10_max
value: 43.4268
- type: nauc_precision_at_10_std
value: 7.4028
- type: nauc_precision_at_10_diff1
value: 38.508700000000005
- type: nauc_precision_at_20_max
value: 42.416199999999996
- type: nauc_precision_at_20_std
value: 9.0454
- type: nauc_precision_at_20_diff1
value: 36.9086
- type: nauc_precision_at_100_max
value: 40.23
- type: nauc_precision_at_100_std
value: 15.776000000000002
- type: nauc_precision_at_100_diff1
value: 36.492599999999996
- type: nauc_precision_at_1000_max
value: 36.7611
- type: nauc_precision_at_1000_std
value: 16.9938
- type: nauc_precision_at_1000_diff1
value: 29.5398
- type: nauc_mrr_at_1_max
value: 49.8526
- type: nauc_mrr_at_1_std
value: 6.944500000000001
- type: nauc_mrr_at_1_diff1
value: 59.0325
- type: nauc_mrr_at_3_max
value: 48.932900000000004
- type: nauc_mrr_at_3_std
value: 6.285499999999999
- type: nauc_mrr_at_3_diff1
value: 53.4821
- type: nauc_mrr_at_5_max
value: 48.709799999999994
- type: nauc_mrr_at_5_std
value: 6.5305
- type: nauc_mrr_at_5_diff1
value: 52.8586
- type: nauc_mrr_at_10_max
value: 48.2504
- type: nauc_mrr_at_10_std
value: 6.535299999999999
- type: nauc_mrr_at_10_diff1
value: 52.410000000000004
- type: nauc_mrr_at_20_max
value: 48.2424
- type: nauc_mrr_at_20_std
value: 6.6425
- type: nauc_mrr_at_20_diff1
value: 52.4289
- type: nauc_mrr_at_100_max
value: 48.254999999999995
- type: nauc_mrr_at_100_std
value: 6.7272
- type: nauc_mrr_at_100_diff1
value: 52.517199999999995
- type: nauc_mrr_at_1000_max
value: 48.2618
- type: nauc_mrr_at_1000_std
value: 6.7179
- type: nauc_mrr_at_1000_diff1
value: 52.5296
- type: main_score
value: 50.166
- task:
type: Retrieval
dataset:
name: MTEB COIRCodeSearchNetRetrieval (java)
type: CoIR-Retrieval/CodeSearchNet
config: java
split: test
revision: 4adc7bc41202b5c13543c9c886a25f340634dab3
metrics:
- type: ndcg_at_1
value: 42.355
- type: ndcg_at_3
value: 50.89
- type: ndcg_at_5
value: 53.089
- type: ndcg_at_10
value: 55.062
- type: ndcg_at_20
value: 56.373
- type: ndcg_at_100
value: 58.268
- type: ndcg_at_1000
value: 59.367999999999995
- type: map_at_1
value: 42.355
- type: map_at_3
value: 48.825
- type: map_at_5
value: 50.05
- type: map_at_10
value: 50.866
- type: map_at_20
value: 51.227999999999994
- type: map_at_100
value: 51.486
- type: map_at_1000
value: 51.525
- type: recall_at_1
value: 42.355
- type: recall_at_3
value: 56.851
- type: recall_at_5
value: 62.173
- type: recall_at_10
value: 68.26100000000001
- type: recall_at_20
value: 73.437
- type: recall_at_100
value: 83.706
- type: recall_at_1000
value: 92.506
- type: precision_at_1
value: 42.355
- type: precision_at_3
value: 18.95
- type: precision_at_5
value: 12.435
- type: precision_at_10
value: 6.8260000000000005
- type: precision_at_20
value: 3.672
- type: precision_at_100
value: 0.8370000000000001
- type: precision_at_1000
value: 0.093
- type: mrr_at_1
value: 42.3551
- type: mrr_at_3
value: 48.8255
- type: mrr_at_5
value: 50.049600000000005
- type: mrr_at_10
value: 50.8665
- type: mrr_at_20
value: 51.227999999999994
- type: mrr_at_100
value: 51.486
- type: mrr_at_1000
value: 51.525200000000005
- type: nauc_ndcg_at_1_max
value: 41.261700000000005
- type: nauc_ndcg_at_1_std
value: -4.1932
- type: nauc_ndcg_at_1_diff1
value: 62.1792
- type: nauc_ndcg_at_3_max
value: 43.6389
- type: nauc_ndcg_at_3_std
value: -2.7453000000000003
- type: nauc_ndcg_at_3_diff1
value: 56.621
- type: nauc_ndcg_at_5_max
value: 43.5895
- type: nauc_ndcg_at_5_std
value: -2.1214
- type: nauc_ndcg_at_5_diff1
value: 55.7216
- type: nauc_ndcg_at_10_max
value: 43.56
- type: nauc_ndcg_at_10_std
value: -1.2124
- type: nauc_ndcg_at_10_diff1
value: 55.1817
- type: nauc_ndcg_at_20_max
value: 43.6918
- type: nauc_ndcg_at_20_std
value: -0.4332
- type: nauc_ndcg_at_20_diff1
value: 54.9887
- type: nauc_ndcg_at_100_max
value: 43.945499999999996
- type: nauc_ndcg_at_100_std
value: 0.3674
- type: nauc_ndcg_at_100_diff1
value: 55.237899999999996
- type: nauc_ndcg_at_1000_max
value: 43.8498
- type: nauc_ndcg_at_1000_std
value: 0.1663
- type: nauc_ndcg_at_1000_diff1
value: 55.6509
- type: nauc_map_at_1_max
value: 41.261700000000005
- type: nauc_map_at_1_std
value: -4.1932
- type: nauc_map_at_1_diff1
value: 62.1792
- type: nauc_map_at_3_max
value: 43.0699
- type: nauc_map_at_3_std
value: -3.1619
- type: nauc_map_at_3_diff1
value: 57.961600000000004
- type: nauc_map_at_5_max
value: 43.0235
- type: nauc_map_at_5_std
value: -2.8471
- type: nauc_map_at_5_diff1
value: 57.492399999999996
- type: nauc_map_at_10_max
value: 43.0155
- type: nauc_map_at_10_std
value: -2.4906
- type: nauc_map_at_10_diff1
value: 57.308899999999994
- type: nauc_map_at_20_max
value: 43.0405
- type: nauc_map_at_20_std
value: -2.299
- type: nauc_map_at_20_diff1
value: 57.262
- type: nauc_map_at_100_max
value: 43.0606
- type: nauc_map_at_100_std
value: -2.2096
- type: nauc_map_at_100_diff1
value: 57.2982
- type: nauc_map_at_1000_max
value: 43.0566
- type: nauc_map_at_1000_std
value: -2.2155
- type: nauc_map_at_1000_diff1
value: 57.312
- type: nauc_recall_at_1_max
value: 41.261700000000005
- type: nauc_recall_at_1_std
value: -4.1932
- type: nauc_recall_at_1_diff1
value: 62.1792
- type: nauc_recall_at_3_max
value: 45.368199999999995
- type: nauc_recall_at_3_std
value: -1.4471
- type: nauc_recall_at_3_diff1
value: 52.5416
- type: nauc_recall_at_5_max
value: 45.421299999999995
- type: nauc_recall_at_5_std
value: 0.3829
- type: nauc_recall_at_5_diff1
value: 49.8591
- type: nauc_recall_at_10_max
value: 45.4698
- type: nauc_recall_at_10_std
value: 3.9899999999999998
- type: nauc_recall_at_10_diff1
value: 47.100500000000004
- type: nauc_recall_at_20_max
value: 46.4998
- type: nauc_recall_at_20_std
value: 8.8468
- type: nauc_recall_at_20_diff1
value: 45.027899999999995
- type: nauc_recall_at_100_max
value: 50.79559999999999
- type: nauc_recall_at_100_std
value: 21.8125
- type: nauc_recall_at_100_diff1
value: 42.735099999999996
- type: nauc_recall_at_1000_max
value: 55.116
- type: nauc_recall_at_1000_std
value: 37.5788
- type: nauc_recall_at_1000_diff1
value: 42.2857
- type: nauc_precision_at_1_max
value: 41.261700000000005
- type: nauc_precision_at_1_std
value: -4.1932
- type: nauc_precision_at_1_diff1
value: 62.1792
- type: nauc_precision_at_3_max
value: 45.368199999999995
- type: nauc_precision_at_3_std
value: -1.4471
- type: nauc_precision_at_3_diff1
value: 52.5416
- type: nauc_precision_at_5_max
value: 45.421299999999995
- type: nauc_precision_at_5_std
value: 0.3829
- type: nauc_precision_at_5_diff1
value: 49.8591
- type: nauc_precision_at_10_max
value: 45.4698
- type: nauc_precision_at_10_std
value: 3.9899999999999998
- type: nauc_precision_at_10_diff1
value: 47.100500000000004
- type: nauc_precision_at_20_max
value: 46.4998
- type: nauc_precision_at_20_std
value: 8.8468
- type: nauc_precision_at_20_diff1
value: 45.027899999999995
- type: nauc_precision_at_100_max
value: 50.79559999999999
- type: nauc_precision_at_100_std
value: 21.8125
- type: nauc_precision_at_100_diff1
value: 42.735099999999996
- type: nauc_precision_at_1000_max
value: 55.116
- type: nauc_precision_at_1000_std
value: 37.5788
- type: nauc_precision_at_1000_diff1
value: 42.2857
- type: nauc_mrr_at_1_max
value: 41.261700000000005
- type: nauc_mrr_at_1_std
value: -4.1932
- type: nauc_mrr_at_1_diff1
value: 62.1792
- type: nauc_mrr_at_3_max
value: 43.0699
- type: nauc_mrr_at_3_std
value: -3.1619
- type: nauc_mrr_at_3_diff1
value: 57.961600000000004
- type: nauc_mrr_at_5_max
value: 43.0235
- type: nauc_mrr_at_5_std
value: -2.8471
- type: nauc_mrr_at_5_diff1
value: 57.492399999999996
- type: nauc_mrr_at_10_max
value: 43.0155
- type: nauc_mrr_at_10_std
value: -2.4906
- type: nauc_mrr_at_10_diff1
value: 57.308899999999994
- type: nauc_mrr_at_20_max
value: 43.0405
- type: nauc_mrr_at_20_std
value: -2.299
- type: nauc_mrr_at_20_diff1
value: 57.262
- type: nauc_mrr_at_100_max
value: 43.0606
- type: nauc_mrr_at_100_std
value: -2.2096
- type: nauc_mrr_at_100_diff1
value: 57.2982
- type: nauc_mrr_at_1000_max
value: 43.0566
- type: nauc_mrr_at_1000_std
value: -2.2155
- type: nauc_mrr_at_1000_diff1
value: 57.312
- type: main_score
value: 55.062
- task:
type: Retrieval
dataset:
name: MTEB COIRCodeSearchNetRetrieval (php)
type: CoIR-Retrieval/CodeSearchNet
config: php
split: test
revision: 4adc7bc41202b5c13543c9c886a25f340634dab3
metrics:
- type: ndcg_at_1
value: 36.835
- type: ndcg_at_3
value: 45.147999999999996
- type: ndcg_at_5
value: 47.497
- type: ndcg_at_10
value: 49.784
- type: ndcg_at_20
value: 51.410999999999994
- type: ndcg_at_100
value: 53.715
- type: ndcg_at_1000
value: 55.102
- type: map_at_1
value: 36.835
- type: map_at_3
value: 43.126
- type: map_at_5
value: 44.429
- type: map_at_10
value: 45.377
- type: map_at_20
value: 45.821
- type: map_at_100
value: 46.139
- type: map_at_1000
value: 46.188
- type: recall_at_1
value: 36.835
- type: recall_at_3
value: 50.992000000000004
- type: recall_at_5
value: 56.693000000000005
- type: recall_at_10
value: 63.743
- type: recall_at_20
value: 70.194
- type: recall_at_100
value: 82.65299999999999
- type: recall_at_1000
value: 93.728
- type: precision_at_1
value: 36.835
- type: precision_at_3
value: 16.997
- type: precision_at_5
value: 11.339
- type: precision_at_10
value: 6.3740000000000006
- type: precision_at_20
value: 3.51
- type: precision_at_100
value: 0.827
- type: precision_at_1000
value: 0.094
- type: mrr_at_1
value: 36.8346
- type: mrr_at_3
value: 43.1259
- type: mrr_at_5
value: 44.4289
- type: mrr_at_10
value: 45.3769
- type: mrr_at_20
value: 45.8215
- type: mrr_at_100
value: 46.138600000000004
- type: mrr_at_1000
value: 46.1881
- type: nauc_ndcg_at_1_max
value: 36.9844
- type: nauc_ndcg_at_1_std
value: -3.2222
- type: nauc_ndcg_at_1_diff1
value: 58.896
- type: nauc_ndcg_at_3_max
value: 37.6355
- type: nauc_ndcg_at_3_std
value: -2.2689
- type: nauc_ndcg_at_3_diff1
value: 52.771100000000004
- type: nauc_ndcg_at_5_max
value: 38.175599999999996
- type: nauc_ndcg_at_5_std
value: -1.5131999999999999
- type: nauc_ndcg_at_5_diff1
value: 52.0101
- type: nauc_ndcg_at_10_max
value: 38.2873
- type: nauc_ndcg_at_10_std
value: -0.5444
- type: nauc_ndcg_at_10_diff1
value: 51.3992
- type: nauc_ndcg_at_20_max
value: 38.324200000000005
- type: nauc_ndcg_at_20_std
value: 0.1328
- type: nauc_ndcg_at_20_diff1
value: 51.2346
- type: nauc_ndcg_at_100_max
value: 38.6313
- type: nauc_ndcg_at_100_std
value: 0.9426
- type: nauc_ndcg_at_100_diff1
value: 51.65729999999999
- type: nauc_ndcg_at_1000_max
value: 38.6274
- type: nauc_ndcg_at_1000_std
value: 0.69
- type: nauc_ndcg_at_1000_diff1
value: 52.1029
- type: nauc_map_at_1_max
value: 36.9844
- type: nauc_map_at_1_std
value: -3.2222
- type: nauc_map_at_1_diff1
value: 58.896
- type: nauc_map_at_3_max
value: 37.523
- type: nauc_map_at_3_std
value: -2.5115
- type: nauc_map_at_3_diff1
value: 54.17960000000001
- type: nauc_map_at_5_max
value: 37.8191
- type: nauc_map_at_5_std
value: -2.1073
- type: nauc_map_at_5_diff1
value: 53.780499999999996
- type: nauc_map_at_10_max
value: 37.8581
- type: nauc_map_at_10_std
value: -1.7191999999999998
- type: nauc_map_at_10_diff1
value: 53.541700000000006
- type: nauc_map_at_20_max
value: 37.8684
- type: nauc_map_at_20_std
value: -1.5565
- type: nauc_map_at_20_diff1
value: 53.5155
- type: nauc_map_at_100_max
value: 37.9101
- type: nauc_map_at_100_std
value: -1.4577
- type: nauc_map_at_100_diff1
value: 53.5894
- type: nauc_map_at_1000_max
value: 37.9109
- type: nauc_map_at_1000_std
value: -1.4617
- type: nauc_map_at_1000_diff1
value: 53.6044
- type: nauc_recall_at_1_max
value: 36.9844
- type: nauc_recall_at_1_std
value: -3.2222
- type: nauc_recall_at_1_diff1
value: 58.896
- type: nauc_recall_at_3_max
value: 37.9468
- type: nauc_recall_at_3_std
value: -1.5512
- type: nauc_recall_at_3_diff1
value: 48.6655
- type: nauc_recall_at_5_max
value: 39.3342
- type: nauc_recall_at_5_std
value: 0.44739999999999996
- type: nauc_recall_at_5_diff1
value: 46.475100000000005
- type: nauc_recall_at_10_max
value: 39.8619
- type: nauc_recall_at_10_std
value: 4.0042
- type: nauc_recall_at_10_diff1
value: 43.8251
- type: nauc_recall_at_20_max
value: 40.226299999999995
- type: nauc_recall_at_20_std
value: 8.052299999999999
- type: nauc_recall_at_20_diff1
value: 41.937400000000004
- type: nauc_recall_at_100_max
value: 44.221
- type: nauc_recall_at_100_std
value: 20.433699999999998
- type: nauc_recall_at_100_diff1
value: 40.745599999999996
- type: nauc_recall_at_1000_max
value: 52.6045
- type: nauc_recall_at_1000_std
value: 40.3497
- type: nauc_recall_at_1000_diff1
value: 40.248
- type: nauc_precision_at_1_max
value: 36.9844
- type: nauc_precision_at_1_std
value: -3.2222
- type: nauc_precision_at_1_diff1
value: 58.896
- type: nauc_precision_at_3_max
value: 37.9468
- type: nauc_precision_at_3_std
value: -1.5512
- type: nauc_precision_at_3_diff1
value: 48.6655
- type: nauc_precision_at_5_max
value: 39.3342
- type: nauc_precision_at_5_std
value: 0.44739999999999996
- type: nauc_precision_at_5_diff1
value: 46.475100000000005
- type: nauc_precision_at_10_max
value: 39.8619
- type: nauc_precision_at_10_std
value: 4.0042
- type: nauc_precision_at_10_diff1
value: 43.8251
- type: nauc_precision_at_20_max
value: 40.226299999999995
- type: nauc_precision_at_20_std
value: 8.052299999999999
- type: nauc_precision_at_20_diff1
value: 41.937400000000004
- type: nauc_precision_at_100_max
value: 44.221
- type: nauc_precision_at_100_std
value: 20.433699999999998
- type: nauc_precision_at_100_diff1
value: 40.745599999999996
- type: nauc_precision_at_1000_max
value: 52.6045
- type: nauc_precision_at_1000_std
value: 40.3497
- type: nauc_precision_at_1000_diff1
value: 40.248
- type: nauc_mrr_at_1_max
value: 36.9844
- type: nauc_mrr_at_1_std
value: -3.2222
- type: nauc_mrr_at_1_diff1
value: 58.896
- type: nauc_mrr_at_3_max
value: 37.523
- type: nauc_mrr_at_3_std
value: -2.5115
- type: nauc_mrr_at_3_diff1
value: 54.17960000000001
- type: nauc_mrr_at_5_max
value: 37.8191
- type: nauc_mrr_at_5_std
value: -2.1073
- type: nauc_mrr_at_5_diff1
value: 53.780499999999996
- type: nauc_mrr_at_10_max
value: 37.8581
- type: nauc_mrr_at_10_std
value: -1.7191999999999998
- type: nauc_mrr_at_10_diff1
value: 53.541700000000006
- type: nauc_mrr_at_20_max
value: 37.8684
- type: nauc_mrr_at_20_std
value: -1.5565
- type: nauc_mrr_at_20_diff1
value: 53.5155
- type: nauc_mrr_at_100_max
value: 37.9101
- type: nauc_mrr_at_100_std
value: -1.4577
- type: nauc_mrr_at_100_diff1
value: 53.5894
- type: nauc_mrr_at_1000_max
value: 37.9109
- type: nauc_mrr_at_1000_std
value: -1.4617
- type: nauc_mrr_at_1000_diff1
value: 53.6044
- type: main_score
value: 49.784
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: ndcg_at_1
value: 44.206
- type: ndcg_at_3
value: 49.364999999999995
- type: ndcg_at_5
value: 51.429
- type: ndcg_at_10
value: 54.106
- type: ndcg_at_20
value: 56.271
- type: ndcg_at_100
value: 59.33500000000001
- type: ndcg_at_1000
value: 61.015
- type: map_at_1
value: 35.797000000000004
- type: map_at_3
value: 44.137
- type: map_at_5
value: 46.062999999999995
- type: map_at_10
value: 47.793
- type: map_at_20
value: 48.730000000000004
- type: map_at_100
value: 49.422
- type: map_at_1000
value: 49.546
- type: recall_at_1
value: 35.797000000000004
- type: recall_at_3
value: 51.224000000000004
- type: recall_at_5
value: 57.218999999999994
- type: recall_at_10
value: 65.182
- type: recall_at_20
value: 72.76700000000001
- type: recall_at_100
value: 86.654
- type: recall_at_1000
value: 97.131
- type: precision_at_1
value: 44.206
- type: precision_at_3
value: 23.653
- type: precision_at_5
value: 16.91
- type: precision_at_10
value: 10.443
- type: precision_at_20
value: 6.194999999999999
- type: precision_at_100
value: 1.6310000000000002
- type: precision_at_1000
value: 0.214
- type: mrr_at_1
value: 44.206
- type: mrr_at_3
value: 51.430600000000005
- type: mrr_at_5
value: 52.839800000000004
- type: mrr_at_10
value: 53.808
- type: mrr_at_20
value: 54.2585
- type: mrr_at_100
value: 54.540200000000006
- type: mrr_at_1000
value: 54.577799999999996
- type: nauc_ndcg_at_1_max
value: 45.573
- type: nauc_ndcg_at_1_std
value: -5.092300000000001
- type: nauc_ndcg_at_1_diff1
value: 50.8011
- type: nauc_ndcg_at_3_max
value: 44.7194
- type: nauc_ndcg_at_3_std
value: -2.979
- type: nauc_ndcg_at_3_diff1
value: 49.4014
- type: nauc_ndcg_at_5_max
value: 45.9838
- type: nauc_ndcg_at_5_std
value: -2.4417999999999997
- type: nauc_ndcg_at_5_diff1
value: 48.2985
- type: nauc_ndcg_at_10_max
value: 45.6755
- type: nauc_ndcg_at_10_std
value: -2.1826000000000003
- type: nauc_ndcg_at_10_diff1
value: 48.443799999999996
- type: nauc_ndcg_at_20_max
value: 45.967200000000005
- type: nauc_ndcg_at_20_std
value: -0.3553
- type: nauc_ndcg_at_20_diff1
value: 48.0216
- type: nauc_ndcg_at_100_max
value: 46.3459
- type: nauc_ndcg_at_100_std
value: 0.6947
- type: nauc_ndcg_at_100_diff1
value: 48.3313
- type: nauc_ndcg_at_1000_max
value: 46.245599999999996
- type: nauc_ndcg_at_1000_std
value: -0.3032
- type: nauc_ndcg_at_1000_diff1
value: 48.3821
- type: nauc_map_at_1_max
value: 38.896
- type: nauc_map_at_1_std
value: -5.7093
- type: nauc_map_at_1_diff1
value: 54.4608
- type: nauc_map_at_3_max
value: 42.6164
- type: nauc_map_at_3_std
value: -4.6751000000000005
- type: nauc_map_at_3_diff1
value: 52.23759999999999
- type: nauc_map_at_5_max
value: 43.9491
- type: nauc_map_at_5_std
value: -3.8674
- type: nauc_map_at_5_diff1
value: 51.03189999999999
- type: nauc_map_at_10_max
value: 44.4192
- type: nauc_map_at_10_std
value: -3.4564999999999997
- type: nauc_map_at_10_diff1
value: 50.6846
- type: nauc_map_at_20_max
value: 44.8404
- type: nauc_map_at_20_std
value: -2.67
- type: nauc_map_at_20_diff1
value: 50.3892
- type: nauc_map_at_100_max
value: 44.9988
- type: nauc_map_at_100_std
value: -2.4528000000000003
- type: nauc_map_at_100_diff1
value: 50.2602
- type: nauc_map_at_1000_max
value: 45.0043
- type: nauc_map_at_1000_std
value: -2.5084
- type: nauc_map_at_1000_diff1
value: 50.2302
- type: nauc_recall_at_1_max
value: 38.896
- type: nauc_recall_at_1_std
value: -5.7093
- type: nauc_recall_at_1_diff1
value: 54.4608
- type: nauc_recall_at_3_max
value: 40.917500000000004
- type: nauc_recall_at_3_std
value: -2.9875
- type: nauc_recall_at_3_diff1
value: 47.935
- type: nauc_recall_at_5_max
value: 43.578
- type: nauc_recall_at_5_std
value: -0.0832
- type: nauc_recall_at_5_diff1
value: 43.924800000000005
- type: nauc_recall_at_10_max
value: 42.3348
- type: nauc_recall_at_10_std
value: 1.2774
- type: nauc_recall_at_10_diff1
value: 42.5842
- type: nauc_recall_at_20_max
value: 43.4429
- type: nauc_recall_at_20_std
value: 9.6387
- type: nauc_recall_at_20_diff1
value: 40.1222
- type: nauc_recall_at_100_max
value: 47.6245
- type: nauc_recall_at_100_std
value: 28.7436
- type: nauc_recall_at_100_diff1
value: 42.3728
- type: nauc_recall_at_1000_max
value: 57.4835
- type: nauc_recall_at_1000_std
value: 66.6109
- type: nauc_recall_at_1000_diff1
value: 48.025
- type: nauc_precision_at_1_max
value: 45.573
- type: nauc_precision_at_1_std
value: -5.092300000000001
- type: nauc_precision_at_1_diff1
value: 50.8011
- type: nauc_precision_at_3_max
value: 39.7982
- type: nauc_precision_at_3_std
value: 1.3032
- type: nauc_precision_at_3_diff1
value: 26.422600000000003
- type: nauc_precision_at_5_max
value: 36.86
- type: nauc_precision_at_5_std
value: 3.9888
- type: nauc_precision_at_5_diff1
value: 13.4191
- type: nauc_precision_at_10_max
value: 26.663199999999996
- type: nauc_precision_at_10_std
value: 6.388299999999999
- type: nauc_precision_at_10_diff1
value: 2.1197
- type: nauc_precision_at_20_max
value: 19.8196
- type: nauc_precision_at_20_std
value: 9.0818
- type: nauc_precision_at_20_diff1
value: -6.483999999999999
- type: nauc_precision_at_100_max
value: 5.6951
- type: nauc_precision_at_100_std
value: 5.3285
- type: nauc_precision_at_100_diff1
value: -17.9036
- type: nauc_precision_at_1000_max
value: -9.107999999999999
- type: nauc_precision_at_1000_std
value: -7.5626999999999995
- type: nauc_precision_at_1000_diff1
value: -27.7189
- type: nauc_mrr_at_1_max
value: 45.573
- type: nauc_mrr_at_1_std
value: -5.092300000000001
- type: nauc_mrr_at_1_diff1
value: 50.8011
- type: nauc_mrr_at_3_max
value: 46.394800000000004
- type: nauc_mrr_at_3_std
value: -3.6457
- type: nauc_mrr_at_3_diff1
value: 48.8878
- type: nauc_mrr_at_5_max
value: 46.7342
- type: nauc_mrr_at_5_std
value: -3.2079999999999997
- type: nauc_mrr_at_5_diff1
value: 47.9827
- type: nauc_mrr_at_10_max
value: 46.4047
- type: nauc_mrr_at_10_std
value: -2.9571
- type: nauc_mrr_at_10_diff1
value: 48.036
- type: nauc_mrr_at_20_max
value: 46.3645
- type: nauc_mrr_at_20_std
value: -2.6208
- type: nauc_mrr_at_20_diff1
value: 48.030699999999996
- type: nauc_mrr_at_100_max
value: 46.3951
- type: nauc_mrr_at_100_std
value: -2.693
- type: nauc_mrr_at_100_diff1
value: 48.128
- type: nauc_mrr_at_1000_max
value: 46.403299999999994
- type: nauc_mrr_at_1000_std
value: -2.7043999999999997
- type: nauc_mrr_at_1000_diff1
value: 48.1413
- type: main_score
value: 54.106
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: ndcg_at_1
value: 41.274
- type: ndcg_at_3
value: 46.022999999999996
- type: ndcg_at_5
value: 47.882999999999996
- type: ndcg_at_10
value: 50.251000000000005
- type: ndcg_at_20
value: 51.93
- type: ndcg_at_100
value: 54.725
- type: ndcg_at_1000
value: 56.635000000000005
- type: map_at_1
value: 32.748
- type: map_at_3
value: 40.916000000000004
- type: map_at_5
value: 42.620999999999995
- type: map_at_10
value: 44.138
- type: map_at_20
value: 44.911
- type: map_at_100
value: 45.565
- type: map_at_1000
value: 45.698
- type: recall_at_1
value: 32.748
- type: recall_at_3
value: 47.522999999999996
- type: recall_at_5
value: 52.957
- type: recall_at_10
value: 60.321999999999996
- type: recall_at_20
value: 66.506
- type: recall_at_100
value: 79.669
- type: recall_at_1000
value: 91.73
- type: precision_at_1
value: 41.274
- type: precision_at_3
value: 22.718
- type: precision_at_5
value: 16.064
- type: precision_at_10
value: 9.828000000000001
- type: precision_at_20
value: 5.783
- type: precision_at_100
value: 1.5730000000000002
- type: precision_at_1000
value: 0.202
- type: mrr_at_1
value: 41.273900000000005
- type: mrr_at_3
value: 48.2378
- type: mrr_at_5
value: 49.5626
- type: mrr_at_10
value: 50.459900000000005
- type: mrr_at_20
value: 50.805
- type: mrr_at_100
value: 51.069900000000004
- type: mrr_at_1000
value: 51.1088
- type: nauc_ndcg_at_1_max
value: 44.7657
- type: nauc_ndcg_at_1_std
value: 3.7028
- type: nauc_ndcg_at_1_diff1
value: 52.017199999999995
- type: nauc_ndcg_at_3_max
value: 45.2602
- type: nauc_ndcg_at_3_std
value: 3.9891
- type: nauc_ndcg_at_3_diff1
value: 48.9746
- type: nauc_ndcg_at_5_max
value: 45.0766
- type: nauc_ndcg_at_5_std
value: 4.1764
- type: nauc_ndcg_at_5_diff1
value: 48.5708
- type: nauc_ndcg_at_10_max
value: 45.0325
- type: nauc_ndcg_at_10_std
value: 4.8281
- type: nauc_ndcg_at_10_diff1
value: 47.6424
- type: nauc_ndcg_at_20_max
value: 45.2904
- type: nauc_ndcg_at_20_std
value: 5.739
- type: nauc_ndcg_at_20_diff1
value: 47.7781
- type: nauc_ndcg_at_100_max
value: 45.6547
- type: nauc_ndcg_at_100_std
value: 7.6744
- type: nauc_ndcg_at_100_diff1
value: 47.2483
- type: nauc_ndcg_at_1000_max
value: 45.5879
- type: nauc_ndcg_at_1000_std
value: 7.919
- type: nauc_ndcg_at_1000_diff1
value: 47.172799999999995
- type: nauc_map_at_1_max
value: 35.7481
- type: nauc_map_at_1_std
value: -6.451
- type: nauc_map_at_1_diff1
value: 55.3994
- type: nauc_map_at_3_max
value: 41.4679
- type: nauc_map_at_3_std
value: -2.2265
- type: nauc_map_at_3_diff1
value: 51.9234
- type: nauc_map_at_5_max
value: 42.2532
- type: nauc_map_at_5_std
value: -0.9950000000000001
- type: nauc_map_at_5_diff1
value: 51.172200000000004
- type: nauc_map_at_10_max
value: 43.0496
- type: nauc_map_at_10_std
value: 0.3319
- type: nauc_map_at_10_diff1
value: 50.3961
- type: nauc_map_at_20_max
value: 43.6286
- type: nauc_map_at_20_std
value: 1.2991000000000001
- type: nauc_map_at_20_diff1
value: 50.2938
- type: nauc_map_at_100_max
value: 43.906800000000004
- type: nauc_map_at_100_std
value: 2.1626
- type: nauc_map_at_100_diff1
value: 50.1124
- type: nauc_map_at_1000_max
value: 43.9529
- type: nauc_map_at_1000_std
value: 2.309
- type: nauc_map_at_1000_diff1
value: 50.0859
- type: nauc_recall_at_1_max
value: 35.7481
- type: nauc_recall_at_1_std
value: -6.451
- type: nauc_recall_at_1_diff1
value: 55.3994
- type: nauc_recall_at_3_max
value: 40.739
- type: nauc_recall_at_3_std
value: -0.9688
- type: nauc_recall_at_3_diff1
value: 47.1898
- type: nauc_recall_at_5_max
value: 41.494
- type: nauc_recall_at_5_std
value: 2.1174
- type: nauc_recall_at_5_diff1
value: 44.5816
- type: nauc_recall_at_10_max
value: 41.739
- type: nauc_recall_at_10_std
value: 5.7603
- type: nauc_recall_at_10_diff1
value: 39.9929
- type: nauc_recall_at_20_max
value: 42.9217
- type: nauc_recall_at_20_std
value: 10.6088
- type: nauc_recall_at_20_diff1
value: 39.1455
- type: nauc_recall_at_100_max
value: 45.1375
- type: nauc_recall_at_100_std
value: 25.986700000000003
- type: nauc_recall_at_100_diff1
value: 33.972
- type: nauc_recall_at_1000_max
value: 46.050200000000004
- type: nauc_recall_at_1000_std
value: 44.597300000000004
- type: nauc_recall_at_1000_diff1
value: 26.326100000000004
- type: nauc_precision_at_1_max
value: 44.7657
- type: nauc_precision_at_1_std
value: 3.7028
- type: nauc_precision_at_1_diff1
value: 52.017199999999995
- type: nauc_precision_at_3_max
value: 44.291799999999995
- type: nauc_precision_at_3_std
value: 18.334500000000002
- type: nauc_precision_at_3_diff1
value: 25.625500000000002
- type: nauc_precision_at_5_max
value: 40.8025
- type: nauc_precision_at_5_std
value: 23.6687
- type: nauc_precision_at_5_diff1
value: 16.6574
- type: nauc_precision_at_10_max
value: 35.7196
- type: nauc_precision_at_10_std
value: 29.852099999999997
- type: nauc_precision_at_10_diff1
value: 5.6891
- type: nauc_precision_at_20_max
value: 30.119
- type: nauc_precision_at_20_std
value: 33.204
- type: nauc_precision_at_20_diff1
value: -0.23509999999999998
- type: nauc_precision_at_100_max
value: 18.7797
- type: nauc_precision_at_100_std
value: 38.9405
- type: nauc_precision_at_100_diff1
value: -10.8005
- type: nauc_precision_at_1000_max
value: 9.0466
- type: nauc_precision_at_1000_std
value: 35.3392
- type: nauc_precision_at_1000_diff1
value: -16.3137
- type: nauc_mrr_at_1_max
value: 44.7657
- type: nauc_mrr_at_1_std
value: 3.7028
- type: nauc_mrr_at_1_diff1
value: 52.017199999999995
- type: nauc_mrr_at_3_max
value: 45.8134
- type: nauc_mrr_at_3_std
value: 5.6788
- type: nauc_mrr_at_3_diff1
value: 48.666199999999996
- type: nauc_mrr_at_5_max
value: 45.8823
- type: nauc_mrr_at_5_std
value: 6.4417
- type: nauc_mrr_at_5_diff1
value: 48.1545
- type: nauc_mrr_at_10_max
value: 45.813500000000005
- type: nauc_mrr_at_10_std
value: 6.7535
- type: nauc_mrr_at_10_diff1
value: 47.726400000000005
- type: nauc_mrr_at_20_max
value: 45.792500000000004
- type: nauc_mrr_at_20_std
value: 6.8521
- type: nauc_mrr_at_20_diff1
value: 47.7553
- type: nauc_mrr_at_100_max
value: 45.8482
- type: nauc_mrr_at_100_std
value: 6.979399999999999
- type: nauc_mrr_at_100_diff1
value: 47.7743
- type: nauc_mrr_at_1000_max
value: 45.8456
- type: nauc_mrr_at_1000_std
value: 6.9712
- type: nauc_mrr_at_1000_diff1
value: 47.7803
- type: main_score
value: 50.251000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: ndcg_at_1
value: 47.147
- type: ndcg_at_3
value: 53.969
- type: ndcg_at_5
value: 56.743
- type: ndcg_at_10
value: 59.318000000000005
- type: ndcg_at_20
value: 60.897999999999996
- type: ndcg_at_100
value: 62.971999999999994
- type: ndcg_at_1000
value: 64.033
- type: map_at_1
value: 41.126000000000005
- type: map_at_3
value: 50.388999999999996
- type: map_at_5
value: 52.286
- type: map_at_10
value: 53.661
- type: map_at_20
value: 54.228
- type: map_at_100
value: 54.588
- type: map_at_1000
value: 54.638
- type: recall_at_1
value: 41.126000000000005
- type: recall_at_3
value: 58.374
- type: recall_at_5
value: 65.226
- type: recall_at_10
value: 72.69099999999999
- type: recall_at_20
value: 78.62
- type: recall_at_100
value: 88.69200000000001
- type: recall_at_1000
value: 96.232
- type: precision_at_1
value: 47.147
- type: precision_at_3
value: 24.159
- type: precision_at_5
value: 16.577
- type: precision_at_10
value: 9.549000000000001
- type: precision_at_20
value: 5.276
- type: precision_at_100
value: 1.224
- type: precision_at_1000
value: 0.135
- type: mrr_at_1
value: 47.147299999999994
- type: mrr_at_3
value: 54.4305
- type: mrr_at_5
value: 55.95719999999999
- type: mrr_at_10
value: 56.8499
- type: mrr_at_20
value: 57.230000000000004
- type: mrr_at_100
value: 57.4584
- type: mrr_at_1000
value: 57.4867
- type: nauc_ndcg_at_1_max
value: 43.5129
- type: nauc_ndcg_at_1_std
value: -3.5116
- type: nauc_ndcg_at_1_diff1
value: 52.717000000000006
- type: nauc_ndcg_at_3_max
value: 43.6514
- type: nauc_ndcg_at_3_std
value: -3.7903
- type: nauc_ndcg_at_3_diff1
value: 48.7913
- type: nauc_ndcg_at_5_max
value: 44.465700000000005
- type: nauc_ndcg_at_5_std
value: -3.3794999999999997
- type: nauc_ndcg_at_5_diff1
value: 48.8527
- type: nauc_ndcg_at_10_max
value: 46.0891
- type: nauc_ndcg_at_10_std
value: -0.5534
- type: nauc_ndcg_at_10_diff1
value: 48.857099999999996
- type: nauc_ndcg_at_20_max
value: 46.1334
- type: nauc_ndcg_at_20_std
value: 0.2072
- type: nauc_ndcg_at_20_diff1
value: 48.8269
- type: nauc_ndcg_at_100_max
value: 46.2793
- type: nauc_ndcg_at_100_std
value: 1.2965
- type: nauc_ndcg_at_100_diff1
value: 48.6421
- type: nauc_ndcg_at_1000_max
value: 46.1606
- type: nauc_ndcg_at_1000_std
value: 0.5259
- type: nauc_ndcg_at_1000_diff1
value: 48.9864
- type: nauc_map_at_1_max
value: 36.4337
- type: nauc_map_at_1_std
value: -5.6848
- type: nauc_map_at_1_diff1
value: 53.42360000000001
- type: nauc_map_at_3_max
value: 41.6669
- type: nauc_map_at_3_std
value: -5.6545
- type: nauc_map_at_3_diff1
value: 49.6128
- type: nauc_map_at_5_max
value: 42.6809
- type: nauc_map_at_5_std
value: -4.9988
- type: nauc_map_at_5_diff1
value: 49.645
- type: nauc_map_at_10_max
value: 43.7393
- type: nauc_map_at_10_std
value: -3.3649
- type: nauc_map_at_10_diff1
value: 49.574
- type: nauc_map_at_20_max
value: 43.9855
- type: nauc_map_at_20_std
value: -2.8590999999999998
- type: nauc_map_at_20_diff1
value: 49.5139
- type: nauc_map_at_100_max
value: 44.0978
- type: nauc_map_at_100_std
value: -2.604
- type: nauc_map_at_100_diff1
value: 49.4857
- type: nauc_map_at_1000_max
value: 44.114399999999996
- type: nauc_map_at_1000_std
value: -2.6081
- type: nauc_map_at_1000_diff1
value: 49.508799999999994
- type: nauc_recall_at_1_max
value: 36.4337
- type: nauc_recall_at_1_std
value: -5.6848
- type: nauc_recall_at_1_diff1
value: 53.42360000000001
- type: nauc_recall_at_3_max
value: 41.320299999999996
- type: nauc_recall_at_3_std
value: -5.7135
- type: nauc_recall_at_3_diff1
value: 45.0436
- type: nauc_recall_at_5_max
value: 43.1656
- type: nauc_recall_at_5_std
value: -3.8888
- type: nauc_recall_at_5_diff1
value: 44.3304
- type: nauc_recall_at_10_max
value: 48.9816
- type: nauc_recall_at_10_std
value: 5.9506000000000006
- type: nauc_recall_at_10_diff1
value: 43.9217
- type: nauc_recall_at_20_max
value: 50.5525
- type: nauc_recall_at_20_std
value: 11.8017
- type: nauc_recall_at_20_diff1
value: 43.4987
- type: nauc_recall_at_100_max
value: 54.654
- type: nauc_recall_at_100_std
value: 31.634800000000002
- type: nauc_recall_at_100_diff1
value: 38.7139
- type: nauc_recall_at_1000_max
value: 62.253
- type: nauc_recall_at_1000_std
value: 42.6522
- type: nauc_recall_at_1000_diff1
value: 38.3715
- type: nauc_precision_at_1_max
value: 43.5129
- type: nauc_precision_at_1_std
value: -3.5116
- type: nauc_precision_at_1_diff1
value: 52.717000000000006
- type: nauc_precision_at_3_max
value: 41.983399999999996
- type: nauc_precision_at_3_std
value: 2.4643
- type: nauc_precision_at_3_diff1
value: 28.185
- type: nauc_precision_at_5_max
value: 39.8061
- type: nauc_precision_at_5_std
value: 6.4715
- type: nauc_precision_at_5_diff1
value: 21.333199999999998
- type: nauc_precision_at_10_max
value: 37.914500000000004
- type: nauc_precision_at_10_std
value: 17.1485
- type: nauc_precision_at_10_diff1
value: 12.6277
- type: nauc_precision_at_20_max
value: 34.0432
- type: nauc_precision_at_20_std
value: 23.0425
- type: nauc_precision_at_20_diff1
value: 5.551699999999999
- type: nauc_precision_at_100_max
value: 26.0405
- type: nauc_precision_at_100_std
value: 28.572599999999998
- type: nauc_precision_at_100_diff1
value: -4.2162
- type: nauc_precision_at_1000_max
value: 20.176099999999998
- type: nauc_precision_at_1000_std
value: 27.293499999999998
- type: nauc_precision_at_1000_diff1
value: -7.4514
- type: nauc_mrr_at_1_max
value: 43.5129
- type: nauc_mrr_at_1_std
value: -3.5116
- type: nauc_mrr_at_1_diff1
value: 52.717000000000006
- type: nauc_mrr_at_3_max
value: 44.9785
- type: nauc_mrr_at_3_std
value: -2.2618
- type: nauc_mrr_at_3_diff1
value: 49.8663
- type: nauc_mrr_at_5_max
value: 45.1749
- type: nauc_mrr_at_5_std
value: -2.1027
- type: nauc_mrr_at_5_diff1
value: 49.8332
- type: nauc_mrr_at_10_max
value: 45.6015
- type: nauc_mrr_at_10_std
value: -1.3832
- type: nauc_mrr_at_10_diff1
value: 49.9586
- type: nauc_mrr_at_20_max
value: 45.535399999999996
- type: nauc_mrr_at_20_std
value: -1.2799
- type: nauc_mrr_at_20_diff1
value: 49.9829
- type: nauc_mrr_at_100_max
value: 45.5168
- type: nauc_mrr_at_100_std
value: -1.2195
- type: nauc_mrr_at_100_diff1
value: 49.9728
- type: nauc_mrr_at_1000_max
value: 45.5076
- type: nauc_mrr_at_1000_std
value: -1.2494
- type: nauc_mrr_at_1000_diff1
value: 49.977
- type: main_score
value: 59.318000000000005
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: ndcg_at_1
value: 30.734
- type: ndcg_at_3
value: 38.672000000000004
- type: ndcg_at_5
value: 40.954
- type: ndcg_at_10
value: 43.564
- type: ndcg_at_20
value: 45.48
- type: ndcg_at_100
value: 48.419000000000004
- type: ndcg_at_1000
value: 50.404
- type: map_at_1
value: 28.464
- type: map_at_3
value: 35.704
- type: map_at_5
value: 37.116
- type: map_at_10
value: 38.279999999999994
- type: map_at_20
value: 38.834
- type: map_at_100
value: 39.277
- type: map_at_1000
value: 39.355000000000004
- type: recall_at_1
value: 28.464
- type: recall_at_3
value: 44.588
- type: recall_at_5
value: 50.031000000000006
- type: recall_at_10
value: 57.621
- type: recall_at_20
value: 64.85499999999999
- type: recall_at_100
value: 79.66
- type: recall_at_1000
value: 94.633
- type: precision_at_1
value: 30.734
- type: precision_at_3
value: 16.497
- type: precision_at_5
value: 11.254
- type: precision_at_10
value: 6.633
- type: precision_at_20
value: 3.757
- type: precision_at_100
value: 0.9560000000000001
- type: precision_at_1000
value: 0.116
- type: mrr_at_1
value: 30.734499999999997
- type: mrr_at_3
value: 38.1356
- type: mrr_at_5
value: 39.3616
- type: mrr_at_10
value: 40.4225
- type: mrr_at_20
value: 40.9334
- type: mrr_at_100
value: 41.297200000000004
- type: mrr_at_1000
value: 41.354600000000005
- type: nauc_ndcg_at_1_max
value: 30.2094
- type: nauc_ndcg_at_1_std
value: -6.9741
- type: nauc_ndcg_at_1_diff1
value: 47.5543
- type: nauc_ndcg_at_3_max
value: 31.4334
- type: nauc_ndcg_at_3_std
value: -4.7826
- type: nauc_ndcg_at_3_diff1
value: 41.1025
- type: nauc_ndcg_at_5_max
value: 32.3557
- type: nauc_ndcg_at_5_std
value: -4.1379
- type: nauc_ndcg_at_5_diff1
value: 40.81
- type: nauc_ndcg_at_10_max
value: 32.3949
- type: nauc_ndcg_at_10_std
value: -2.3524
- type: nauc_ndcg_at_10_diff1
value: 39.5175
- type: nauc_ndcg_at_20_max
value: 31.680500000000002
- type: nauc_ndcg_at_20_std
value: -1.7559000000000002
- type: nauc_ndcg_at_20_diff1
value: 38.1515
- type: nauc_ndcg_at_100_max
value: 31.4167
- type: nauc_ndcg_at_100_std
value: -1.0329
- type: nauc_ndcg_at_100_diff1
value: 37.8268
- type: nauc_ndcg_at_1000_max
value: 31.736900000000002
- type: nauc_ndcg_at_1000_std
value: -1.8415000000000001
- type: nauc_ndcg_at_1000_diff1
value: 39.0335
- type: nauc_map_at_1_max
value: 28.260099999999998
- type: nauc_map_at_1_std
value: -9.0806
- type: nauc_map_at_1_diff1
value: 47.6706
- type: nauc_map_at_3_max
value: 30.551000000000002
- type: nauc_map_at_3_std
value: -6.0257
- type: nauc_map_at_3_diff1
value: 42.8155
- type: nauc_map_at_5_max
value: 31.285800000000002
- type: nauc_map_at_5_std
value: -5.671600000000001
- type: nauc_map_at_5_diff1
value: 42.5887
- type: nauc_map_at_10_max
value: 31.329800000000002
- type: nauc_map_at_10_std
value: -4.8092999999999995
- type: nauc_map_at_10_diff1
value: 41.9856
- type: nauc_map_at_20_max
value: 31.2046
- type: nauc_map_at_20_std
value: -4.612
- type: nauc_map_at_20_diff1
value: 41.658699999999996
- type: nauc_map_at_100_max
value: 31.181399999999996
- type: nauc_map_at_100_std
value: -4.4687
- type: nauc_map_at_100_diff1
value: 41.5836
- type: nauc_map_at_1000_max
value: 31.1979
- type: nauc_map_at_1000_std
value: -4.4772
- type: nauc_map_at_1000_diff1
value: 41.627900000000004
- type: nauc_recall_at_1_max
value: 28.260099999999998
- type: nauc_recall_at_1_std
value: -9.0806
- type: nauc_recall_at_1_diff1
value: 47.6706
- type: nauc_recall_at_3_max
value: 31.129800000000003
- type: nauc_recall_at_3_std
value: -3.2782
- type: nauc_recall_at_3_diff1
value: 35.4529
- type: nauc_recall_at_5_max
value: 33.6541
- type: nauc_recall_at_5_std
value: -1.7704999999999997
- type: nauc_recall_at_5_diff1
value: 34.9944
- type: nauc_recall_at_10_max
value: 33.536100000000005
- type: nauc_recall_at_10_std
value: 3.4567
- type: nauc_recall_at_10_diff1
value: 30.553599999999996
- type: nauc_recall_at_20_max
value: 29.889100000000003
- type: nauc_recall_at_20_std
value: 6.5926
- type: nauc_recall_at_20_diff1
value: 23.217
- type: nauc_recall_at_100_max
value: 27.4646
- type: nauc_recall_at_100_std
value: 15.746199999999998
- type: nauc_recall_at_100_diff1
value: 15.1327
- type: nauc_recall_at_1000_max
value: 32.294200000000004
- type: nauc_recall_at_1000_std
value: 21.6293
- type: nauc_recall_at_1000_diff1
value: 11.265600000000001
- type: nauc_precision_at_1_max
value: 30.2094
- type: nauc_precision_at_1_std
value: -6.9741
- type: nauc_precision_at_1_diff1
value: 47.5543
- type: nauc_precision_at_3_max
value: 34.3053
- type: nauc_precision_at_3_std
value: 0.42760000000000004
- type: nauc_precision_at_3_diff1
value: 33.4827
- type: nauc_precision_at_5_max
value: 35.4035
- type: nauc_precision_at_5_std
value: 2.3141
- type: nauc_precision_at_5_diff1
value: 30.8004
- type: nauc_precision_at_10_max
value: 33.4042
- type: nauc_precision_at_10_std
value: 8.6847
- type: nauc_precision_at_10_diff1
value: 23.558200000000003
- type: nauc_precision_at_20_max
value: 29.015200000000004
- type: nauc_precision_at_20_std
value: 11.3556
- type: nauc_precision_at_20_diff1
value: 15.774099999999999
- type: nauc_precision_at_100_max
value: 16.663700000000002
- type: nauc_precision_at_100_std
value: 14.666100000000002
- type: nauc_precision_at_100_diff1
value: 2.1911
- type: nauc_precision_at_1000_max
value: 7.348599999999999
- type: nauc_precision_at_1000_std
value: 8.8804
- type: nauc_precision_at_1000_diff1
value: -7.026599999999999
- type: nauc_mrr_at_1_max
value: 30.2094
- type: nauc_mrr_at_1_std
value: -6.9741
- type: nauc_mrr_at_1_diff1
value: 47.5543
- type: nauc_mrr_at_3_max
value: 31.831500000000002
- type: nauc_mrr_at_3_std
value: -3.6407000000000003
- type: nauc_mrr_at_3_diff1
value: 42.445
- type: nauc_mrr_at_5_max
value: 32.273
- type: nauc_mrr_at_5_std
value: -3.5416000000000003
- type: nauc_mrr_at_5_diff1
value: 42.5464
- type: nauc_mrr_at_10_max
value: 32.3297
- type: nauc_mrr_at_10_std
value: -2.9149000000000003
- type: nauc_mrr_at_10_diff1
value: 42.0233
- type: nauc_mrr_at_20_max
value: 32.124
- type: nauc_mrr_at_20_std
value: -2.7826
- type: nauc_mrr_at_20_diff1
value: 41.652
- type: nauc_mrr_at_100_max
value: 32.0994
- type: nauc_mrr_at_100_std
value: -2.7182999999999997
- type: nauc_mrr_at_100_diff1
value: 41.6024
- type: nauc_mrr_at_1000_max
value: 32.1058
- type: nauc_mrr_at_1000_std
value: -2.7332
- type: nauc_mrr_at_1000_diff1
value: 41.652899999999995
- type: main_score
value: 43.564
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: ndcg_at_1
value: 22.886
- type: ndcg_at_3
value: 27.864
- type: ndcg_at_5
value: 30.177
- type: ndcg_at_10
value: 32.749
- type: ndcg_at_20
value: 35.343
- type: ndcg_at_100
value: 39.095
- type: ndcg_at_1000
value: 41.656
- type: map_at_1
value: 18.119
- type: map_at_3
value: 24.340999999999998
- type: map_at_5
value: 25.861
- type: map_at_10
value: 27.055
- type: map_at_20
value: 27.855
- type: map_at_100
value: 28.461
- type: map_at_1000
value: 28.577
- type: recall_at_1
value: 18.119
- type: recall_at_3
value: 31.633
- type: recall_at_5
value: 37.532
- type: recall_at_10
value: 44.983000000000004
- type: recall_at_20
value: 54.234
- type: recall_at_100
value: 72.396
- type: recall_at_1000
value: 90.223
- type: precision_at_1
value: 22.886
- type: precision_at_3
value: 13.682
- type: precision_at_5
value: 9.950000000000001
- type: precision_at_10
value: 6.1690000000000005
- type: precision_at_20
value: 3.8120000000000003
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.14300000000000002
- type: mrr_at_1
value: 22.8856
- type: mrr_at_3
value: 29.6642
- type: mrr_at_5
value: 31.107000000000003
- type: mrr_at_10
value: 32.2342
- type: mrr_at_20
value: 32.8971
- type: mrr_at_100
value: 33.2804
- type: mrr_at_1000
value: 33.3395
- type: nauc_ndcg_at_1_max
value: 24.8022
- type: nauc_ndcg_at_1_std
value: -0.5363
- type: nauc_ndcg_at_1_diff1
value: 33.1639
- type: nauc_ndcg_at_3_max
value: 22.0142
- type: nauc_ndcg_at_3_std
value: 0.9467
- type: nauc_ndcg_at_3_diff1
value: 28.9545
- type: nauc_ndcg_at_5_max
value: 21.9949
- type: nauc_ndcg_at_5_std
value: 2.2558000000000002
- type: nauc_ndcg_at_5_diff1
value: 27.4516
- type: nauc_ndcg_at_10_max
value: 21.5958
- type: nauc_ndcg_at_10_std
value: 3.5044
- type: nauc_ndcg_at_10_diff1
value: 26.9835
- type: nauc_ndcg_at_20_max
value: 21.940299999999997
- type: nauc_ndcg_at_20_std
value: 4.6913
- type: nauc_ndcg_at_20_diff1
value: 26.8386
- type: nauc_ndcg_at_100_max
value: 22.4749
- type: nauc_ndcg_at_100_std
value: 6.1636999999999995
- type: nauc_ndcg_at_100_diff1
value: 27.4132
- type: nauc_ndcg_at_1000_max
value: 23.034299999999998
- type: nauc_ndcg_at_1000_std
value: 5.7944
- type: nauc_ndcg_at_1000_diff1
value: 27.3963
- type: nauc_map_at_1_max
value: 21.4135
- type: nauc_map_at_1_std
value: 0.649
- type: nauc_map_at_1_diff1
value: 32.1954
- type: nauc_map_at_3_max
value: 20.8778
- type: nauc_map_at_3_std
value: 1.0705
- type: nauc_map_at_3_diff1
value: 28.5319
- type: nauc_map_at_5_max
value: 21.0234
- type: nauc_map_at_5_std
value: 1.5574
- type: nauc_map_at_5_diff1
value: 27.996399999999998
- type: nauc_map_at_10_max
value: 20.9927
- type: nauc_map_at_10_std
value: 2.2451
- type: nauc_map_at_10_diff1
value: 27.8283
- type: nauc_map_at_20_max
value: 21.16
- type: nauc_map_at_20_std
value: 2.6176999999999997
- type: nauc_map_at_20_diff1
value: 27.7722
- type: nauc_map_at_100_max
value: 21.3551
- type: nauc_map_at_100_std
value: 2.8299000000000003
- type: nauc_map_at_100_diff1
value: 27.8752
- type: nauc_map_at_1000_max
value: 21.3871
- type: nauc_map_at_1000_std
value: 2.7986
- type: nauc_map_at_1000_diff1
value: 27.8709
- type: nauc_recall_at_1_max
value: 21.4135
- type: nauc_recall_at_1_std
value: 0.649
- type: nauc_recall_at_1_diff1
value: 32.1954
- type: nauc_recall_at_3_max
value: 19.3537
- type: nauc_recall_at_3_std
value: 1.4591
- type: nauc_recall_at_3_diff1
value: 25.1911
- type: nauc_recall_at_5_max
value: 19.6154
- type: nauc_recall_at_5_std
value: 3.5305000000000004
- type: nauc_recall_at_5_diff1
value: 22.6218
- type: nauc_recall_at_10_max
value: 18.3048
- type: nauc_recall_at_10_std
value: 6.1244
- type: nauc_recall_at_10_diff1
value: 21.6834
- type: nauc_recall_at_20_max
value: 18.4913
- type: nauc_recall_at_20_std
value: 10.083599999999999
- type: nauc_recall_at_20_diff1
value: 20.502200000000002
- type: nauc_recall_at_100_max
value: 19.0212
- type: nauc_recall_at_100_std
value: 21.8101
- type: nauc_recall_at_100_diff1
value: 21.2653
- type: nauc_recall_at_1000_max
value: 29.3582
- type: nauc_recall_at_1000_std
value: 42.8902
- type: nauc_recall_at_1000_diff1
value: 14.060900000000002
- type: nauc_precision_at_1_max
value: 24.8022
- type: nauc_precision_at_1_std
value: -0.5363
- type: nauc_precision_at_1_diff1
value: 33.1639
- type: nauc_precision_at_3_max
value: 23.9746
- type: nauc_precision_at_3_std
value: 0.9273999999999999
- type: nauc_precision_at_3_diff1
value: 26.0507
- type: nauc_precision_at_5_max
value: 23.5487
- type: nauc_precision_at_5_std
value: 2.8788
- type: nauc_precision_at_5_diff1
value: 22.439799999999998
- type: nauc_precision_at_10_max
value: 21.826999999999998
- type: nauc_precision_at_10_std
value: 5.6201
- type: nauc_precision_at_10_diff1
value: 19.8703
- type: nauc_precision_at_20_max
value: 21.199399999999997
- type: nauc_precision_at_20_std
value: 8.9305
- type: nauc_precision_at_20_diff1
value: 18.043
- type: nauc_precision_at_100_max
value: 17.2345
- type: nauc_precision_at_100_std
value: 10.0714
- type: nauc_precision_at_100_diff1
value: 14.521999999999998
- type: nauc_precision_at_1000_max
value: 7.5709
- type: nauc_precision_at_1000_std
value: 0.2689
- type: nauc_precision_at_1000_diff1
value: 4.4733
- type: nauc_mrr_at_1_max
value: 24.8022
- type: nauc_mrr_at_1_std
value: -0.5363
- type: nauc_mrr_at_1_diff1
value: 33.1639
- type: nauc_mrr_at_3_max
value: 24.435499999999998
- type: nauc_mrr_at_3_std
value: 0.9502999999999999
- type: nauc_mrr_at_3_diff1
value: 30.7875
- type: nauc_mrr_at_5_max
value: 24.7103
- type: nauc_mrr_at_5_std
value: 1.8724999999999998
- type: nauc_mrr_at_5_diff1
value: 30.086000000000002
- type: nauc_mrr_at_10_max
value: 24.5685
- type: nauc_mrr_at_10_std
value: 2.1533
- type: nauc_mrr_at_10_diff1
value: 29.862899999999996
- type: nauc_mrr_at_20_max
value: 24.662100000000002
- type: nauc_mrr_at_20_std
value: 2.3742
- type: nauc_mrr_at_20_diff1
value: 29.751300000000004
- type: nauc_mrr_at_100_max
value: 24.635099999999998
- type: nauc_mrr_at_100_std
value: 2.4393000000000002
- type: nauc_mrr_at_100_diff1
value: 29.741
- type: nauc_mrr_at_1000_max
value: 24.651699999999998
- type: nauc_mrr_at_1000_std
value: 2.4291
- type: nauc_mrr_at_1000_diff1
value: 29.7639
- type: main_score
value: 32.749
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: ndcg_at_1
value: 38.114
- type: ndcg_at_3
value: 42.986000000000004
- type: ndcg_at_5
value: 45.893
- type: ndcg_at_10
value: 48.339999999999996
- type: ndcg_at_20
value: 50.617000000000004
- type: ndcg_at_100
value: 53.861000000000004
- type: ndcg_at_1000
value: 55.701
- type: map_at_1
value: 30.517
- type: map_at_3
value: 38.443
- type: map_at_5
value: 40.685
- type: map_at_10
value: 42.031
- type: map_at_20
value: 42.79
- type: map_at_100
value: 43.415
- type: map_at_1000
value: 43.525000000000006
- type: recall_at_1
value: 30.517
- type: recall_at_3
value: 46.015
- type: recall_at_5
value: 53.801
- type: recall_at_10
value: 61.332
- type: recall_at_20
value: 69.274
- type: recall_at_100
value: 84.051
- type: recall_at_1000
value: 95.826
- type: precision_at_1
value: 38.114
- type: precision_at_3
value: 20.821
- type: precision_at_5
value: 15.034
- type: precision_at_10
value: 8.892999999999999
- type: precision_at_20
value: 5.231
- type: precision_at_100
value: 1.375
- type: precision_at_1000
value: 0.172
- type: mrr_at_1
value: 38.1136
- type: mrr_at_3
value: 45.1716
- type: mrr_at_5
value: 46.8175
- type: mrr_at_10
value: 47.7831
- type: mrr_at_20
value: 48.329
- type: mrr_at_100
value: 48.6471
- type: mrr_at_1000
value: 48.6877
- type: nauc_ndcg_at_1_max
value: 40.1541
- type: nauc_ndcg_at_1_std
value: 1.4596
- type: nauc_ndcg_at_1_diff1
value: 56.6442
- type: nauc_ndcg_at_3_max
value: 38.9776
- type: nauc_ndcg_at_3_std
value: 1.464
- type: nauc_ndcg_at_3_diff1
value: 51.5596
- type: nauc_ndcg_at_5_max
value: 38.8678
- type: nauc_ndcg_at_5_std
value: 2.5537
- type: nauc_ndcg_at_5_diff1
value: 50.522
- type: nauc_ndcg_at_10_max
value: 38.698100000000004
- type: nauc_ndcg_at_10_std
value: 2.7959
- type: nauc_ndcg_at_10_diff1
value: 49.8331
- type: nauc_ndcg_at_20_max
value: 39.7247
- type: nauc_ndcg_at_20_std
value: 4.1737
- type: nauc_ndcg_at_20_diff1
value: 49.5233
- type: nauc_ndcg_at_100_max
value: 40.649
- type: nauc_ndcg_at_100_std
value: 5.7359
- type: nauc_ndcg_at_100_diff1
value: 50.0626
- type: nauc_ndcg_at_1000_max
value: 40.765299999999996
- type: nauc_ndcg_at_1000_std
value: 5.5551
- type: nauc_ndcg_at_1000_diff1
value: 50.3599
- type: nauc_map_at_1_max
value: 35.659
- type: nauc_map_at_1_std
value: -3.8913
- type: nauc_map_at_1_diff1
value: 57.7115
- type: nauc_map_at_3_max
value: 37.3901
- type: nauc_map_at_3_std
value: -0.88
- type: nauc_map_at_3_diff1
value: 52.9203
- type: nauc_map_at_5_max
value: 38.0129
- type: nauc_map_at_5_std
value: 0.1544
- type: nauc_map_at_5_diff1
value: 52.1596
- type: nauc_map_at_10_max
value: 38.3708
- type: nauc_map_at_10_std
value: 0.7947
- type: nauc_map_at_10_diff1
value: 51.909000000000006
- type: nauc_map_at_20_max
value: 38.690200000000004
- type: nauc_map_at_20_std
value: 1.2379
- type: nauc_map_at_20_diff1
value: 51.775000000000006
- type: nauc_map_at_100_max
value: 38.9637
- type: nauc_map_at_100_std
value: 1.5914000000000001
- type: nauc_map_at_100_diff1
value: 51.90820000000001
- type: nauc_map_at_1000_max
value: 38.9784
- type: nauc_map_at_1000_std
value: 1.6184
- type: nauc_map_at_1000_diff1
value: 51.909000000000006
- type: nauc_recall_at_1_max
value: 35.659
- type: nauc_recall_at_1_std
value: -3.8913
- type: nauc_recall_at_1_diff1
value: 57.7115
- type: nauc_recall_at_3_max
value: 34.6073
- type: nauc_recall_at_3_std
value: 0.0162
- type: nauc_recall_at_3_diff1
value: 47.0539
- type: nauc_recall_at_5_max
value: 34.3868
- type: nauc_recall_at_5_std
value: 3.1425
- type: nauc_recall_at_5_diff1
value: 43.1625
- type: nauc_recall_at_10_max
value: 33.6467
- type: nauc_recall_at_10_std
value: 4.1808
- type: nauc_recall_at_10_diff1
value: 39.711600000000004
- type: nauc_recall_at_20_max
value: 36.3449
- type: nauc_recall_at_20_std
value: 9.7358
- type: nauc_recall_at_20_diff1
value: 36.5764
- type: nauc_recall_at_100_max
value: 40.563500000000005
- type: nauc_recall_at_100_std
value: 23.5405
- type: nauc_recall_at_100_diff1
value: 34.2152
- type: nauc_recall_at_1000_max
value: 57.387699999999995
- type: nauc_recall_at_1000_std
value: 50.897999999999996
- type: nauc_recall_at_1000_diff1
value: 32.9321
- type: nauc_precision_at_1_max
value: 40.1541
- type: nauc_precision_at_1_std
value: 1.4596
- type: nauc_precision_at_1_diff1
value: 56.6442
- type: nauc_precision_at_3_max
value: 36.586600000000004
- type: nauc_precision_at_3_std
value: 9.7112
- type: nauc_precision_at_3_diff1
value: 33.8758
- type: nauc_precision_at_5_max
value: 34.1914
- type: nauc_precision_at_5_std
value: 13.7515
- type: nauc_precision_at_5_diff1
value: 24.6272
- type: nauc_precision_at_10_max
value: 30.764999999999997
- type: nauc_precision_at_10_std
value: 16.9823
- type: nauc_precision_at_10_diff1
value: 15.954799999999999
- type: nauc_precision_at_20_max
value: 27.976699999999997
- type: nauc_precision_at_20_std
value: 21.465999999999998
- type: nauc_precision_at_20_diff1
value: 7.0363999999999995
- type: nauc_precision_at_100_max
value: 17.6394
- type: nauc_precision_at_100_std
value: 23.4207
- type: nauc_precision_at_100_diff1
value: -4.0614
- type: nauc_precision_at_1000_max
value: 3.8186999999999998
- type: nauc_precision_at_1000_std
value: 16.0902
- type: nauc_precision_at_1000_diff1
value: -14.5093
- type: nauc_mrr_at_1_max
value: 40.1541
- type: nauc_mrr_at_1_std
value: 1.4596
- type: nauc_mrr_at_1_diff1
value: 56.6442
- type: nauc_mrr_at_3_max
value: 40.4577
- type: nauc_mrr_at_3_std
value: 3.558
- type: nauc_mrr_at_3_diff1
value: 53.0569
- type: nauc_mrr_at_5_max
value: 40.6135
- type: nauc_mrr_at_5_std
value: 4.3164
- type: nauc_mrr_at_5_diff1
value: 52.3585
- type: nauc_mrr_at_10_max
value: 40.6563
- type: nauc_mrr_at_10_std
value: 4.3038
- type: nauc_mrr_at_10_diff1
value: 52.2149
- type: nauc_mrr_at_20_max
value: 40.914
- type: nauc_mrr_at_20_std
value: 4.5423
- type: nauc_mrr_at_20_diff1
value: 52.2729
- type: nauc_mrr_at_100_max
value: 40.8944
- type: nauc_mrr_at_100_std
value: 4.546
- type: nauc_mrr_at_100_diff1
value: 52.315400000000004
- type: nauc_mrr_at_1000_max
value: 40.893499999999996
- type: nauc_mrr_at_1000_std
value: 4.5310999999999995
- type: nauc_mrr_at_1000_diff1
value: 52.337500000000006
- type: main_score
value: 48.339999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: ndcg_at_1
value: 34.247
- type: ndcg_at_3
value: 38.976
- type: ndcg_at_5
value: 41.332
- type: ndcg_at_10
value: 44.065
- type: ndcg_at_20
value: 46.312999999999995
- type: ndcg_at_100
value: 49.434
- type: ndcg_at_1000
value: 51.681999999999995
- type: map_at_1
value: 27.395999999999997
- type: map_at_3
value: 34.782999999999994
- type: map_at_5
value: 36.63
- type: map_at_10
value: 38.043
- type: map_at_20
value: 38.783
- type: map_at_100
value: 39.341
- type: map_at_1000
value: 39.454
- type: recall_at_1
value: 27.395999999999997
- type: recall_at_3
value: 41.785
- type: recall_at_5
value: 48.303000000000004
- type: recall_at_10
value: 56.481
- type: recall_at_20
value: 64.473
- type: recall_at_100
value: 79.012
- type: recall_at_1000
value: 94.182
- type: precision_at_1
value: 34.247
- type: precision_at_3
value: 18.759999999999998
- type: precision_at_5
value: 13.333
- type: precision_at_10
value: 8.059
- type: precision_at_20
value: 4.766
- type: precision_at_100
value: 1.258
- type: precision_at_1000
value: 0.16199999999999998
- type: mrr_at_1
value: 34.2466
- type: mrr_at_3
value: 41.172
- type: mrr_at_5
value: 42.701699999999995
- type: mrr_at_10
value: 43.6807
- type: mrr_at_20
value: 44.1991
- type: mrr_at_100
value: 44.5097
- type: mrr_at_1000
value: 44.5693
- type: nauc_ndcg_at_1_max
value: 38.232
- type: nauc_ndcg_at_1_std
value: 3.374
- type: nauc_ndcg_at_1_diff1
value: 51.223200000000006
- type: nauc_ndcg_at_3_max
value: 38.839800000000004
- type: nauc_ndcg_at_3_std
value: 6.529
- type: nauc_ndcg_at_3_diff1
value: 44.2371
- type: nauc_ndcg_at_5_max
value: 39.0094
- type: nauc_ndcg_at_5_std
value: 8.2202
- type: nauc_ndcg_at_5_diff1
value: 44.8305
- type: nauc_ndcg_at_10_max
value: 40.1918
- type: nauc_ndcg_at_10_std
value: 9.9826
- type: nauc_ndcg_at_10_diff1
value: 43.5034
- type: nauc_ndcg_at_20_max
value: 40.7846
- type: nauc_ndcg_at_20_std
value: 11.0178
- type: nauc_ndcg_at_20_diff1
value: 43.176199999999994
- type: nauc_ndcg_at_100_max
value: 40.5507
- type: nauc_ndcg_at_100_std
value: 13.0203
- type: nauc_ndcg_at_100_diff1
value: 43.2445
- type: nauc_ndcg_at_1000_max
value: 40.8071
- type: nauc_ndcg_at_1000_std
value: 11.7945
- type: nauc_ndcg_at_1000_diff1
value: 43.8587
- type: nauc_map_at_1_max
value: 33.517599999999995
- type: nauc_map_at_1_std
value: -0.7517
- type: nauc_map_at_1_diff1
value: 52.92059999999999
- type: nauc_map_at_3_max
value: 36.8937
- type: nauc_map_at_3_std
value: 4.0335
- type: nauc_map_at_3_diff1
value: 46.4322
- type: nauc_map_at_5_max
value: 37.602000000000004
- type: nauc_map_at_5_std
value: 5.3923
- type: nauc_map_at_5_diff1
value: 46.6764
- type: nauc_map_at_10_max
value: 38.3082
- type: nauc_map_at_10_std
value: 6.483600000000001
- type: nauc_map_at_10_diff1
value: 46.0255
- type: nauc_map_at_20_max
value: 38.655899999999995
- type: nauc_map_at_20_std
value: 6.8814
- type: nauc_map_at_20_diff1
value: 45.8245
- type: nauc_map_at_100_max
value: 38.7492
- type: nauc_map_at_100_std
value: 7.327100000000001
- type: nauc_map_at_100_diff1
value: 45.8365
- type: nauc_map_at_1000_max
value: 38.7584
- type: nauc_map_at_1000_std
value: 7.2851
- type: nauc_map_at_1000_diff1
value: 45.8479
- type: nauc_recall_at_1_max
value: 33.517599999999995
- type: nauc_recall_at_1_std
value: -0.7517
- type: nauc_recall_at_1_diff1
value: 52.92059999999999
- type: nauc_recall_at_3_max
value: 37.0749
- type: nauc_recall_at_3_std
value: 7.466399999999999
- type: nauc_recall_at_3_diff1
value: 39.454
- type: nauc_recall_at_5_max
value: 37.227199999999996
- type: nauc_recall_at_5_std
value: 11.7497
- type: nauc_recall_at_5_diff1
value: 39.402
- type: nauc_recall_at_10_max
value: 39.901199999999996
- type: nauc_recall_at_10_std
value: 16.7381
- type: nauc_recall_at_10_diff1
value: 34.3843
- type: nauc_recall_at_20_max
value: 41.0603
- type: nauc_recall_at_20_std
value: 20.78
- type: nauc_recall_at_20_diff1
value: 32.2975
- type: nauc_recall_at_100_max
value: 38.3499
- type: nauc_recall_at_100_std
value: 38.7219
- type: nauc_recall_at_100_diff1
value: 29.078100000000003
- type: nauc_recall_at_1000_max
value: 48.2277
- type: nauc_recall_at_1000_std
value: 55.4646
- type: nauc_recall_at_1000_diff1
value: 26.919900000000002
- type: nauc_precision_at_1_max
value: 38.232
- type: nauc_precision_at_1_std
value: 3.374
- type: nauc_precision_at_1_diff1
value: 51.223200000000006
- type: nauc_precision_at_3_max
value: 39.8718
- type: nauc_precision_at_3_std
value: 14.112
- type: nauc_precision_at_3_diff1
value: 28.971200000000003
- type: nauc_precision_at_5_max
value: 38.7064
- type: nauc_precision_at_5_std
value: 18.1345
- type: nauc_precision_at_5_diff1
value: 26.5685
- type: nauc_precision_at_10_max
value: 36.4352
- type: nauc_precision_at_10_std
value: 22.331500000000002
- type: nauc_precision_at_10_diff1
value: 17.163600000000002
- type: nauc_precision_at_20_max
value: 33.2221
- type: nauc_precision_at_20_std
value: 24.252000000000002
- type: nauc_precision_at_20_diff1
value: 9.0445
- type: nauc_precision_at_100_max
value: 16.5544
- type: nauc_precision_at_100_std
value: 22.867199999999997
- type: nauc_precision_at_100_diff1
value: -3.8588999999999998
- type: nauc_precision_at_1000_max
value: 1.7690000000000001
- type: nauc_precision_at_1000_std
value: 8.2609
- type: nauc_precision_at_1000_diff1
value: -13.8927
- type: nauc_mrr_at_1_max
value: 38.232
- type: nauc_mrr_at_1_std
value: 3.374
- type: nauc_mrr_at_1_diff1
value: 51.223200000000006
- type: nauc_mrr_at_3_max
value: 40.2699
- type: nauc_mrr_at_3_std
value: 7.6
- type: nauc_mrr_at_3_diff1
value: 45.1804
- type: nauc_mrr_at_5_max
value: 40.1434
- type: nauc_mrr_at_5_std
value: 8.3698
- type: nauc_mrr_at_5_diff1
value: 45.1772
- type: nauc_mrr_at_10_max
value: 40.6102
- type: nauc_mrr_at_10_std
value: 8.9793
- type: nauc_mrr_at_10_diff1
value: 44.6458
- type: nauc_mrr_at_20_max
value: 40.5002
- type: nauc_mrr_at_20_std
value: 9.003
- type: nauc_mrr_at_20_diff1
value: 44.671
- type: nauc_mrr_at_100_max
value: 40.4429
- type: nauc_mrr_at_100_std
value: 9.131
- type: nauc_mrr_at_100_diff1
value: 44.728899999999996
- type: nauc_mrr_at_1000_max
value: 40.4634
- type: nauc_mrr_at_1000_std
value: 9.1018
- type: nauc_mrr_at_1000_diff1
value: 44.7656
- type: main_score
value: 44.065
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: ndcg_at_1
value: 33.917750000000005
- type: ndcg_at_3
value: 39.253750000000004
- type: ndcg_at_5
value: 41.62250000000001
- type: ndcg_at_10
value: 44.29191666666667
- type: ndcg_at_20
value: 46.318083333333334
- type: ndcg_at_100
value: 49.489000000000004
- type: ndcg_at_1000
value: 51.534083333333335
- type: map_at_1
value: 28.50841666666667
- type: map_at_3
value: 35.52141666666667
- type: map_at_5
value: 37.228500000000004
- type: map_at_10
value: 38.61175
- type: map_at_20
value: 39.3125
- type: map_at_100
value: 39.882083333333334
- type: map_at_1000
value: 39.995916666666666
- type: recall_at_1
value: 28.50841666666667
- type: recall_at_3
value: 42.46875000000001
- type: recall_at_5
value: 48.59916666666667
- type: recall_at_10
value: 56.56024999999999
- type: recall_at_20
value: 63.96383333333333
- type: recall_at_100
value: 79.2645
- type: recall_at_1000
value: 93.25150000000002
- type: precision_at_1
value: 33.917750000000005
- type: precision_at_3
value: 18.19558333333333
- type: precision_at_5
value: 12.950166666666668
- type: precision_at_10
value: 7.866333333333333
- type: precision_at_20
value: 4.614749999999999
- type: precision_at_100
value: 1.2374166666666666
- type: precision_at_1000
value: 0.16091666666666668
- type: mrr_at_1
value: 33.917699999999996
- type: mrr_at_3
value: 40.448166666666665
- type: mrr_at_5
value: 41.903483333333334
- type: mrr_at_10
value: 42.944941666666665
- type: mrr_at_20
value: 43.43391666666666
- type: mrr_at_100
value: 43.782399999999996
- type: mrr_at_1000
value: 43.832325
- type: nauc_ndcg_at_1_max
value: 38.768750000000004
- type: nauc_ndcg_at_1_std
value: 0.5314750000000001
- type: nauc_ndcg_at_1_diff1
value: 50.18021666666667
- type: nauc_ndcg_at_3_max
value: 37.73569166666667
- type: nauc_ndcg_at_3_std
value: 1.9756250000000004
- type: nauc_ndcg_at_3_diff1
value: 45.217191666666665
- type: nauc_ndcg_at_5_max
value: 38.19843333333333
- type: nauc_ndcg_at_5_std
value: 2.760133333333333
- type: nauc_ndcg_at_5_diff1
value: 44.559908333333325
- type: nauc_ndcg_at_10_max
value: 38.34826666666667
- type: nauc_ndcg_at_10_std
value: 3.8177249999999994
- type: nauc_ndcg_at_10_diff1
value: 43.772149999999996
- type: nauc_ndcg_at_20_max
value: 38.53288333333333
- type: nauc_ndcg_at_20_std
value: 4.801466666666668
- type: nauc_ndcg_at_20_diff1
value: 43.312774999999995
- type: nauc_ndcg_at_100_max
value: 38.912774999999996
- type: nauc_ndcg_at_100_std
value: 6.39795
- type: nauc_ndcg_at_100_diff1
value: 43.38179166666667
- type: nauc_ndcg_at_1000_max
value: 39.0197
- type: nauc_ndcg_at_1000_std
value: 5.861708333333333
- type: nauc_ndcg_at_1000_diff1
value: 43.78785833333334
- type: nauc_map_at_1_max
value: 34.808508333333336
- type: nauc_map_at_1_std
value: -2.4239916666666663
- type: nauc_map_at_1_diff1
value: 51.88476666666666
- type: nauc_map_at_3_max
value: 36.516549999999995
- type: nauc_map_at_3_std
value: 0.008974999999999955
- type: nauc_map_at_3_diff1
value: 47.11013333333332
- type: nauc_map_at_5_max
value: 37.17583333333333
- type: nauc_map_at_5_std
value: 0.7668083333333334
- type: nauc_map_at_5_diff1
value: 46.496975
- type: nauc_map_at_10_max
value: 37.54620833333333
- type: nauc_map_at_10_std
value: 1.5577166666666666
- type: nauc_map_at_10_diff1
value: 46.02030833333334
- type: nauc_map_at_20_max
value: 37.738058333333335
- type: nauc_map_at_20_std
value: 2.0228750000000004
- type: nauc_map_at_20_diff1
value: 45.837608333333336
- type: nauc_map_at_100_max
value: 37.864575
- type: nauc_map_at_100_std
value: 2.3781916666666665
- type: nauc_map_at_100_diff1
value: 45.818783333333336
- type: nauc_map_at_1000_max
value: 37.8704
- type: nauc_map_at_1000_std
value: 2.403341666666667
- type: nauc_map_at_1000_diff1
value: 45.83103333333333
- type: nauc_recall_at_1_max
value: 34.808508333333336
- type: nauc_recall_at_1_std
value: -2.4239916666666663
- type: nauc_recall_at_1_diff1
value: 51.88476666666666
- type: nauc_recall_at_3_max
value: 35.12659166666666
- type: nauc_recall_at_3_std
value: 1.5866916666666664
- type: nauc_recall_at_3_diff1
value: 41.56113333333334
- type: nauc_recall_at_5_max
value: 36.147058333333334
- type: nauc_recall_at_5_std
value: 3.803583333333333
- type: nauc_recall_at_5_diff1
value: 39.051366666666674
- type: nauc_recall_at_10_max
value: 36.10466666666667
- type: nauc_recall_at_10_std
value: 7.102541666666666
- type: nauc_recall_at_10_diff1
value: 35.79460833333333
- type: nauc_recall_at_20_max
value: 36.25878333333333
- type: nauc_recall_at_20_std
value: 11.494475000000001
- type: nauc_recall_at_20_diff1
value: 33.06425833333333
- type: nauc_recall_at_100_max
value: 38.00966666666667
- type: nauc_recall_at_100_std
value: 27.040050000000004
- type: nauc_recall_at_100_diff1
value: 29.968625
- type: nauc_recall_at_1000_max
value: 45.32993333333334
- type: nauc_recall_at_1000_std
value: 45.327316666666675
- type: nauc_recall_at_1000_diff1
value: 28.088641666666668
- type: nauc_precision_at_1_max
value: 38.768750000000004
- type: nauc_precision_at_1_std
value: 0.5314750000000001
- type: nauc_precision_at_1_diff1
value: 50.18021666666667
- type: nauc_precision_at_3_max
value: 36.52460833333333
- type: nauc_precision_at_3_std
value: 7.665850000000001
- type: nauc_precision_at_3_diff1
value: 31.133191666666672
- type: nauc_precision_at_5_max
value: 35.20106666666667
- type: nauc_precision_at_5_std
value: 10.746766666666666
- type: nauc_precision_at_5_diff1
value: 24.582291666666663
- type: nauc_precision_at_10_max
value: 31.465108333333337
- type: nauc_precision_at_10_std
value: 15.019074999999999
- type: nauc_precision_at_10_diff1
value: 16.25574166666667
- type: nauc_precision_at_20_max
value: 27.589949999999995
- type: nauc_precision_at_20_std
value: 18.108775
- type: nauc_precision_at_20_diff1
value: 9.511666666666668
- type: nauc_precision_at_100_max
value: 17.18691666666667
- type: nauc_precision_at_100_std
value: 21.440466666666666
- type: nauc_precision_at_100_diff1
value: -1.2442166666666667
- type: nauc_precision_at_1000_max
value: 5.215425
- type: nauc_precision_at_1000_std
value: 13.896516666666663
- type: nauc_precision_at_1000_diff1
value: -10.446258333333335
- type: nauc_mrr_at_1_max
value: 38.768750000000004
- type: nauc_mrr_at_1_std
value: 0.5314750000000001
- type: nauc_mrr_at_1_diff1
value: 50.18021666666667
- type: nauc_mrr_at_3_max
value: 38.979308333333336
- type: nauc_mrr_at_3_std
value: 2.755991666666666
- type: nauc_mrr_at_3_diff1
value: 45.991875
- type: nauc_mrr_at_5_max
value: 39.26664166666667
- type: nauc_mrr_at_5_std
value: 3.2105333333333332
- type: nauc_mrr_at_5_diff1
value: 45.54448333333333
- type: nauc_mrr_at_10_max
value: 39.239558333333335
- type: nauc_mrr_at_10_std
value: 3.57125
- type: nauc_mrr_at_10_diff1
value: 45.24083333333333
- type: nauc_mrr_at_20_max
value: 39.212075
- type: nauc_mrr_at_20_std
value: 3.7281833333333334
- type: nauc_mrr_at_20_diff1
value: 45.153083333333335
- type: nauc_mrr_at_100_max
value: 39.221091666666666
- type: nauc_mrr_at_100_std
value: 3.823533333333333
- type: nauc_mrr_at_100_diff1
value: 45.19413333333333
- type: nauc_mrr_at_1000_max
value: 39.22478333333333
- type: nauc_mrr_at_1000_std
value: 3.8052833333333327
- type: nauc_mrr_at_1000_diff1
value: 45.21384166666667
- type: main_score
value: 44.29191666666667
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 44.29191666666667
- type: ndcg_at_10
value: 44.29191666666667
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: ndcg_at_1
value: 29.141000000000002
- type: ndcg_at_3
value: 33.861000000000004
- type: ndcg_at_5
value: 35.887
- type: ndcg_at_10
value: 38.596000000000004
- type: ndcg_at_20
value: 40.172000000000004
- type: ndcg_at_100
value: 43.375
- type: ndcg_at_1000
value: 45.562000000000005
- type: map_at_1
value: 25.728
- type: map_at_3
value: 31.268
- type: map_at_5
value: 32.596000000000004
- type: map_at_10
value: 33.903
- type: map_at_20
value: 34.392
- type: map_at_100
value: 34.853
- type: map_at_1000
value: 34.943999999999996
- type: recall_at_1
value: 25.728
- type: recall_at_3
value: 36.638
- type: recall_at_5
value: 41.689
- type: recall_at_10
value: 50.121
- type: recall_at_20
value: 56.043
- type: recall_at_100
value: 72.382
- type: recall_at_1000
value: 88.306
- type: precision_at_1
value: 29.141000000000002
- type: precision_at_3
value: 14.826
- type: precision_at_5
value: 10.428999999999998
- type: precision_at_10
value: 6.334
- type: precision_at_20
value: 3.589
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.121
- type: mrr_at_1
value: 29.141099999999998
- type: mrr_at_3
value: 34.407
- type: mrr_at_5
value: 35.68
- type: mrr_at_10
value: 36.739
- type: mrr_at_20
value: 37.1572
- type: mrr_at_100
value: 37.5448
- type: mrr_at_1000
value: 37.607600000000005
- type: nauc_ndcg_at_1_max
value: 43.0703
- type: nauc_ndcg_at_1_std
value: 7.8586
- type: nauc_ndcg_at_1_diff1
value: 57.5204
- type: nauc_ndcg_at_3_max
value: 41.7529
- type: nauc_ndcg_at_3_std
value: 8.549800000000001
- type: nauc_ndcg_at_3_diff1
value: 52.7211
- type: nauc_ndcg_at_5_max
value: 43.404399999999995
- type: nauc_ndcg_at_5_std
value: 9.117799999999999
- type: nauc_ndcg_at_5_diff1
value: 52.607400000000005
- type: nauc_ndcg_at_10_max
value: 43.8638
- type: nauc_ndcg_at_10_std
value: 10.7135
- type: nauc_ndcg_at_10_diff1
value: 50.7607
- type: nauc_ndcg_at_20_max
value: 43.3389
- type: nauc_ndcg_at_20_std
value: 11.7901
- type: nauc_ndcg_at_20_diff1
value: 50.056900000000006
- type: nauc_ndcg_at_100_max
value: 43.580600000000004
- type: nauc_ndcg_at_100_std
value: 13.616900000000001
- type: nauc_ndcg_at_100_diff1
value: 49.359700000000004
- type: nauc_ndcg_at_1000_max
value: 43.6164
- type: nauc_ndcg_at_1000_std
value: 13.5428
- type: nauc_ndcg_at_1000_diff1
value: 50.0821
- type: nauc_map_at_1_max
value: 40.5495
- type: nauc_map_at_1_std
value: 3.5229999999999997
- type: nauc_map_at_1_diff1
value: 59.7723
- type: nauc_map_at_3_max
value: 41.2977
- type: nauc_map_at_3_std
value: 6.9411000000000005
- type: nauc_map_at_3_diff1
value: 54.879999999999995
- type: nauc_map_at_5_max
value: 42.5686
- type: nauc_map_at_5_std
value: 7.8032
- type: nauc_map_at_5_diff1
value: 54.4624
- type: nauc_map_at_10_max
value: 43.1361
- type: nauc_map_at_10_std
value: 8.8783
- type: nauc_map_at_10_diff1
value: 53.747
- type: nauc_map_at_20_max
value: 42.9941
- type: nauc_map_at_20_std
value: 9.1777
- type: nauc_map_at_20_diff1
value: 53.5394
- type: nauc_map_at_100_max
value: 42.960300000000004
- type: nauc_map_at_100_std
value: 9.3584
- type: nauc_map_at_100_diff1
value: 53.3856
- type: nauc_map_at_1000_max
value: 42.9595
- type: nauc_map_at_1000_std
value: 9.3575
- type: nauc_map_at_1000_diff1
value: 53.4136
- type: nauc_recall_at_1_max
value: 40.5495
- type: nauc_recall_at_1_std
value: 3.5229999999999997
- type: nauc_recall_at_1_diff1
value: 59.7723
- type: nauc_recall_at_3_max
value: 39.5622
- type: nauc_recall_at_3_std
value: 7.614
- type: nauc_recall_at_3_diff1
value: 49.469
- type: nauc_recall_at_5_max
value: 43.086400000000005
- type: nauc_recall_at_5_std
value: 9.1332
- type: nauc_recall_at_5_diff1
value: 47.8829
- type: nauc_recall_at_10_max
value: 43.054700000000004
- type: nauc_recall_at_10_std
value: 13.116900000000001
- type: nauc_recall_at_10_diff1
value: 40.804
- type: nauc_recall_at_20_max
value: 40.8398
- type: nauc_recall_at_20_std
value: 17.099600000000002
- type: nauc_recall_at_20_diff1
value: 37.8978
- type: nauc_recall_at_100_max
value: 41.8268
- type: nauc_recall_at_100_std
value: 31.5507
- type: nauc_recall_at_100_diff1
value: 28.8246
- type: nauc_recall_at_1000_max
value: 44.7113
- type: nauc_recall_at_1000_std
value: 49.8697
- type: nauc_recall_at_1000_diff1
value: 26.7287
- type: nauc_precision_at_1_max
value: 43.0703
- type: nauc_precision_at_1_std
value: 7.8586
- type: nauc_precision_at_1_diff1
value: 57.5204
- type: nauc_precision_at_3_max
value: 41.098
- type: nauc_precision_at_3_std
value: 16.1082
- type: nauc_precision_at_3_diff1
value: 40.5806
- type: nauc_precision_at_5_max
value: 43.8705
- type: nauc_precision_at_5_std
value: 19.470299999999998
- type: nauc_precision_at_5_diff1
value: 36.9411
- type: nauc_precision_at_10_max
value: 41.5225
- type: nauc_precision_at_10_std
value: 22.9023
- type: nauc_precision_at_10_diff1
value: 28.0016
- type: nauc_precision_at_20_max
value: 36.68
- type: nauc_precision_at_20_std
value: 25.5411
- type: nauc_precision_at_20_diff1
value: 22.3414
- type: nauc_precision_at_100_max
value: 25.8805
- type: nauc_precision_at_100_std
value: 29.0719
- type: nauc_precision_at_100_diff1
value: 7.4353
- type: nauc_precision_at_1000_max
value: 12.2406
- type: nauc_precision_at_1000_std
value: 22.909
- type: nauc_precision_at_1000_diff1
value: -4.0427
- type: nauc_mrr_at_1_max
value: 43.0703
- type: nauc_mrr_at_1_std
value: 7.8586
- type: nauc_mrr_at_1_diff1
value: 57.5204
- type: nauc_mrr_at_3_max
value: 42.4962
- type: nauc_mrr_at_3_std
value: 9.9083
- type: nauc_mrr_at_3_diff1
value: 52.81
- type: nauc_mrr_at_5_max
value: 43.7188
- type: nauc_mrr_at_5_std
value: 10.2951
- type: nauc_mrr_at_5_diff1
value: 52.9848
- type: nauc_mrr_at_10_max
value: 43.6725
- type: nauc_mrr_at_10_std
value: 10.8946
- type: nauc_mrr_at_10_diff1
value: 52.037
- type: nauc_mrr_at_20_max
value: 43.4857
- type: nauc_mrr_at_20_std
value: 11.097700000000001
- type: nauc_mrr_at_20_diff1
value: 51.83560000000001
- type: nauc_mrr_at_100_max
value: 43.4906
- type: nauc_mrr_at_100_std
value: 11.2695
- type: nauc_mrr_at_100_diff1
value: 51.783500000000004
- type: nauc_mrr_at_1000_max
value: 43.490899999999996
- type: nauc_mrr_at_1000_std
value: 11.2507
- type: nauc_mrr_at_1000_diff1
value: 51.8107
- type: main_score
value: 38.596000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: ndcg_at_1
value: 24.054000000000002
- type: ndcg_at_3
value: 29.115999999999996
- type: ndcg_at_5
value: 31.286
- type: ndcg_at_10
value: 33.722
- type: ndcg_at_20
value: 35.844
- type: ndcg_at_100
value: 39.361000000000004
- type: ndcg_at_1000
value: 42.064
- type: map_at_1
value: 19.911
- type: map_at_3
value: 25.874999999999996
- type: map_at_5
value: 27.403
- type: map_at_10
value: 28.559
- type: map_at_20
value: 29.213
- type: map_at_100
value: 29.784
- type: map_at_1000
value: 29.909999999999997
- type: recall_at_1
value: 19.911
- type: recall_at_3
value: 32.195
- type: recall_at_5
value: 37.818000000000005
- type: recall_at_10
value: 45.183
- type: recall_at_20
value: 53.081999999999994
- type: recall_at_100
value: 70.25
- type: recall_at_1000
value: 89.22200000000001
- type: precision_at_1
value: 24.054000000000002
- type: precision_at_3
value: 13.914000000000001
- type: precision_at_5
value: 10.069
- type: precision_at_10
value: 6.194
- type: precision_at_20
value: 3.7060000000000004
- type: precision_at_100
value: 1.058
- type: precision_at_1000
value: 0.148
- type: mrr_at_1
value: 24.0537
- type: mrr_at_3
value: 30.161700000000003
- type: mrr_at_5
value: 31.505499999999998
- type: mrr_at_10
value: 32.4828
- type: mrr_at_20
value: 33.054899999999996
- type: mrr_at_100
value: 33.4643
- type: mrr_at_1000
value: 33.534000000000006
- type: nauc_ndcg_at_1_max
value: 30.663200000000003
- type: nauc_ndcg_at_1_std
value: 1.6019999999999999
- type: nauc_ndcg_at_1_diff1
value: 45.730199999999996
- type: nauc_ndcg_at_3_max
value: 28.5124
- type: nauc_ndcg_at_3_std
value: 3.4572
- type: nauc_ndcg_at_3_diff1
value: 37.109500000000004
- type: nauc_ndcg_at_5_max
value: 28.8788
- type: nauc_ndcg_at_5_std
value: 4.5551
- type: nauc_ndcg_at_5_diff1
value: 36.1603
- type: nauc_ndcg_at_10_max
value: 28.4392
- type: nauc_ndcg_at_10_std
value: 5.1365
- type: nauc_ndcg_at_10_diff1
value: 34.6232
- type: nauc_ndcg_at_20_max
value: 28.4854
- type: nauc_ndcg_at_20_std
value: 6.6366
- type: nauc_ndcg_at_20_diff1
value: 34.5488
- type: nauc_ndcg_at_100_max
value: 29.17
- type: nauc_ndcg_at_100_std
value: 7.904
- type: nauc_ndcg_at_100_diff1
value: 34.7771
- type: nauc_ndcg_at_1000_max
value: 29.437
- type: nauc_ndcg_at_1000_std
value: 7.5479
- type: nauc_ndcg_at_1000_diff1
value: 35.605399999999996
- type: nauc_map_at_1_max
value: 28.6015
- type: nauc_map_at_1_std
value: 1.6265
- type: nauc_map_at_1_diff1
value: 46.170899999999996
- type: nauc_map_at_3_max
value: 27.931099999999997
- type: nauc_map_at_3_std
value: 3.3492
- type: nauc_map_at_3_diff1
value: 39.2592
- type: nauc_map_at_5_max
value: 28.268700000000003
- type: nauc_map_at_5_std
value: 3.9050000000000002
- type: nauc_map_at_5_diff1
value: 38.488299999999995
- type: nauc_map_at_10_max
value: 28.197400000000002
- type: nauc_map_at_10_std
value: 4.1464
- type: nauc_map_at_10_diff1
value: 37.7547
- type: nauc_map_at_20_max
value: 28.27
- type: nauc_map_at_20_std
value: 4.5844000000000005
- type: nauc_map_at_20_diff1
value: 37.7547
- type: nauc_map_at_100_max
value: 28.458
- type: nauc_map_at_100_std
value: 4.786300000000001
- type: nauc_map_at_100_diff1
value: 37.782199999999996
- type: nauc_map_at_1000_max
value: 28.4996
- type: nauc_map_at_1000_std
value: 4.7852
- type: nauc_map_at_1000_diff1
value: 37.816300000000005
- type: nauc_recall_at_1_max
value: 28.6015
- type: nauc_recall_at_1_std
value: 1.6265
- type: nauc_recall_at_1_diff1
value: 46.170899999999996
- type: nauc_recall_at_3_max
value: 25.9988
- type: nauc_recall_at_3_std
value: 4.1643
- type: nauc_recall_at_3_diff1
value: 31.9357
- type: nauc_recall_at_5_max
value: 26.6721
- type: nauc_recall_at_5_std
value: 6.1122000000000005
- type: nauc_recall_at_5_diff1
value: 29.1941
- type: nauc_recall_at_10_max
value: 24.9394
- type: nauc_recall_at_10_std
value: 7.313
- type: nauc_recall_at_10_diff1
value: 24.283099999999997
- type: nauc_recall_at_20_max
value: 24.3242
- type: nauc_recall_at_20_std
value: 12.6805
- type: nauc_recall_at_20_diff1
value: 22.8247
- type: nauc_recall_at_100_max
value: 26.917799999999996
- type: nauc_recall_at_100_std
value: 21.5069
- type: nauc_recall_at_100_diff1
value: 21.205
- type: nauc_recall_at_1000_max
value: 29.8594
- type: nauc_recall_at_1000_std
value: 31.4363
- type: nauc_recall_at_1000_diff1
value: 23.8707
- type: nauc_precision_at_1_max
value: 30.663200000000003
- type: nauc_precision_at_1_std
value: 1.6019999999999999
- type: nauc_precision_at_1_diff1
value: 45.730199999999996
- type: nauc_precision_at_3_max
value: 28.3435
- type: nauc_precision_at_3_std
value: 4.1368
- type: nauc_precision_at_3_diff1
value: 28.5551
- type: nauc_precision_at_5_max
value: 28.49
- type: nauc_precision_at_5_std
value: 5.8044
- type: nauc_precision_at_5_diff1
value: 24.5061
- type: nauc_precision_at_10_max
value: 26.255699999999997
- type: nauc_precision_at_10_std
value: 6.998799999999999
- type: nauc_precision_at_10_diff1
value: 18.3038
- type: nauc_precision_at_20_max
value: 25.217699999999997
- type: nauc_precision_at_20_std
value: 9.9304
- type: nauc_precision_at_20_diff1
value: 15.4876
- type: nauc_precision_at_100_max
value: 21.865499999999997
- type: nauc_precision_at_100_std
value: 10.746500000000001
- type: nauc_precision_at_100_diff1
value: 7.4687
- type: nauc_precision_at_1000_max
value: 18.4782
- type: nauc_precision_at_1000_std
value: 3.0096000000000003
- type: nauc_precision_at_1000_diff1
value: 3.3539
- type: nauc_mrr_at_1_max
value: 30.663200000000003
- type: nauc_mrr_at_1_std
value: 1.6019999999999999
- type: nauc_mrr_at_1_diff1
value: 45.730199999999996
- type: nauc_mrr_at_3_max
value: 29.9128
- type: nauc_mrr_at_3_std
value: 3.4235
- type: nauc_mrr_at_3_diff1
value: 39.1412
- type: nauc_mrr_at_5_max
value: 30.3311
- type: nauc_mrr_at_5_std
value: 4.0177
- type: nauc_mrr_at_5_diff1
value: 38.7065
- type: nauc_mrr_at_10_max
value: 30.144399999999997
- type: nauc_mrr_at_10_std
value: 4.2534
- type: nauc_mrr_at_10_diff1
value: 38.0266
- type: nauc_mrr_at_20_max
value: 30.1249
- type: nauc_mrr_at_20_std
value: 4.6181
- type: nauc_mrr_at_20_diff1
value: 38.002
- type: nauc_mrr_at_100_max
value: 30.1948
- type: nauc_mrr_at_100_std
value: 4.7099
- type: nauc_mrr_at_100_diff1
value: 38.0455
- type: nauc_mrr_at_1000_max
value: 30.1966
- type: nauc_mrr_at_1000_std
value: 4.6948
- type: nauc_mrr_at_1000_diff1
value: 38.0747
- type: main_score
value: 33.722
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: ndcg_at_1
value: 35.168
- type: ndcg_at_3
value: 39.972
- type: ndcg_at_5
value: 42.586
- type: ndcg_at_10
value: 46.071
- type: ndcg_at_20
value: 48.028999999999996
- type: ndcg_at_100
value: 51.351
- type: ndcg_at_1000
value: 53.169999999999995
- type: map_at_1
value: 29.819000000000003
- type: map_at_3
value: 36.571999999999996
- type: map_at_5
value: 38.385999999999996
- type: map_at_10
value: 40.073
- type: map_at_20
value: 40.72
- type: map_at_100
value: 41.289
- type: map_at_1000
value: 41.375
- type: recall_at_1
value: 29.819000000000003
- type: recall_at_3
value: 43.245
- type: recall_at_5
value: 49.931
- type: recall_at_10
value: 60.075
- type: recall_at_20
value: 67.118
- type: recall_at_100
value: 82.771
- type: recall_at_1000
value: 95.219
- type: precision_at_1
value: 35.168
- type: precision_at_3
value: 18.221
- type: precision_at_5
value: 12.892000000000001
- type: precision_at_10
value: 7.985
- type: precision_at_20
value: 4.529
- type: precision_at_100
value: 1.185
- type: precision_at_1000
value: 0.14400000000000002
- type: mrr_at_1
value: 35.1679
- type: mrr_at_3
value: 41.4024
- type: mrr_at_5
value: 43.039500000000004
- type: mrr_at_10
value: 44.3808
- type: mrr_at_20
value: 44.823299999999996
- type: mrr_at_100
value: 45.1914
- type: mrr_at_1000
value: 45.2339
- type: nauc_ndcg_at_1_max
value: 43.9321
- type: nauc_ndcg_at_1_std
value: -6.0145
- type: nauc_ndcg_at_1_diff1
value: 53.6293
- type: nauc_ndcg_at_3_max
value: 42.0025
- type: nauc_ndcg_at_3_std
value: -5.6881
- type: nauc_ndcg_at_3_diff1
value: 47.9461
- type: nauc_ndcg_at_5_max
value: 42.916900000000005
- type: nauc_ndcg_at_5_std
value: -4.2002999999999995
- type: nauc_ndcg_at_5_diff1
value: 48.0738
- type: nauc_ndcg_at_10_max
value: 42.6014
- type: nauc_ndcg_at_10_std
value: -2.8179
- type: nauc_ndcg_at_10_diff1
value: 46.792899999999996
- type: nauc_ndcg_at_20_max
value: 41.9182
- type: nauc_ndcg_at_20_std
value: -2.6714
- type: nauc_ndcg_at_20_diff1
value: 46.111000000000004
- type: nauc_ndcg_at_100_max
value: 42.6218
- type: nauc_ndcg_at_100_std
value: -1.6882000000000001
- type: nauc_ndcg_at_100_diff1
value: 46.3204
- type: nauc_ndcg_at_1000_max
value: 42.6413
- type: nauc_ndcg_at_1000_std
value: -2.2983
- type: nauc_ndcg_at_1000_diff1
value: 46.840399999999995
- type: nauc_map_at_1_max
value: 41.256
- type: nauc_map_at_1_std
value: -7.5877
- type: nauc_map_at_1_diff1
value: 56.383300000000006
- type: nauc_map_at_3_max
value: 41.904
- type: nauc_map_at_3_std
value: -6.548
- type: nauc_map_at_3_diff1
value: 50.7949
- type: nauc_map_at_5_max
value: 42.568400000000004
- type: nauc_map_at_5_std
value: -5.3873999999999995
- type: nauc_map_at_5_diff1
value: 50.3791
- type: nauc_map_at_10_max
value: 42.6619
- type: nauc_map_at_10_std
value: -4.8052
- type: nauc_map_at_10_diff1
value: 49.5933
- type: nauc_map_at_20_max
value: 42.4985
- type: nauc_map_at_20_std
value: -4.7620000000000005
- type: nauc_map_at_20_diff1
value: 49.3214
- type: nauc_map_at_100_max
value: 42.6165
- type: nauc_map_at_100_std
value: -4.595599999999999
- type: nauc_map_at_100_diff1
value: 49.277100000000004
- type: nauc_map_at_1000_max
value: 42.6146
- type: nauc_map_at_1000_std
value: -4.5920000000000005
- type: nauc_map_at_1000_diff1
value: 49.2815
- type: nauc_recall_at_1_max
value: 41.256
- type: nauc_recall_at_1_std
value: -7.5877
- type: nauc_recall_at_1_diff1
value: 56.383300000000006
- type: nauc_recall_at_3_max
value: 39.626099999999994
- type: nauc_recall_at_3_std
value: -5.973
- type: nauc_recall_at_3_diff1
value: 44.651
- type: nauc_recall_at_5_max
value: 41.4392
- type: nauc_recall_at_5_std
value: -1.8328
- type: nauc_recall_at_5_diff1
value: 42.928399999999996
- type: nauc_recall_at_10_max
value: 38.807
- type: nauc_recall_at_10_std
value: 2.863
- type: nauc_recall_at_10_diff1
value: 37.6663
- type: nauc_recall_at_20_max
value: 34.9705
- type: nauc_recall_at_20_std
value: 4.1407
- type: nauc_recall_at_20_diff1
value: 33.6156
- type: nauc_recall_at_100_max
value: 38.4049
- type: nauc_recall_at_100_std
value: 16.7735
- type: nauc_recall_at_100_diff1
value: 30.724800000000002
- type: nauc_recall_at_1000_max
value: 42.9152
- type: nauc_recall_at_1000_std
value: 32.1176
- type: nauc_recall_at_1000_diff1
value: 33.2582
- type: nauc_precision_at_1_max
value: 43.9321
- type: nauc_precision_at_1_std
value: -6.0145
- type: nauc_precision_at_1_diff1
value: 53.6293
- type: nauc_precision_at_3_max
value: 38.1748
- type: nauc_precision_at_3_std
value: -2.3163
- type: nauc_precision_at_3_diff1
value: 31.2502
- type: nauc_precision_at_5_max
value: 36.503
- type: nauc_precision_at_5_std
value: 2.0892
- type: nauc_precision_at_5_diff1
value: 25.249100000000002
- type: nauc_precision_at_10_max
value: 30.2104
- type: nauc_precision_at_10_std
value: 6.6937999999999995
- type: nauc_precision_at_10_diff1
value: 14.0684
- type: nauc_precision_at_20_max
value: 23.6494
- type: nauc_precision_at_20_std
value: 7.216500000000001
- type: nauc_precision_at_20_diff1
value: 6.7953
- type: nauc_precision_at_100_max
value: 11.2361
- type: nauc_precision_at_100_std
value: 11.824
- type: nauc_precision_at_100_diff1
value: -7.6405
- type: nauc_precision_at_1000_max
value: -3.8651
- type: nauc_precision_at_1000_std
value: 5.367999999999999
- type: nauc_precision_at_1000_diff1
value: -17.473
- type: nauc_mrr_at_1_max
value: 43.9321
- type: nauc_mrr_at_1_std
value: -6.0145
- type: nauc_mrr_at_1_diff1
value: 53.6293
- type: nauc_mrr_at_3_max
value: 42.8188
- type: nauc_mrr_at_3_std
value: -5.1393
- type: nauc_mrr_at_3_diff1
value: 48.3128
- type: nauc_mrr_at_5_max
value: 43.5383
- type: nauc_mrr_at_5_std
value: -4.2538
- type: nauc_mrr_at_5_diff1
value: 48.0319
- type: nauc_mrr_at_10_max
value: 43.121700000000004
- type: nauc_mrr_at_10_std
value: -3.7823
- type: nauc_mrr_at_10_diff1
value: 47.6064
- type: nauc_mrr_at_20_max
value: 42.8886
- type: nauc_mrr_at_20_std
value: -3.8175
- type: nauc_mrr_at_20_diff1
value: 47.5437
- type: nauc_mrr_at_100_max
value: 42.9514
- type: nauc_mrr_at_100_std
value: -3.8205000000000005
- type: nauc_mrr_at_100_diff1
value: 47.6513
- type: nauc_mrr_at_1000_max
value: 42.9567
- type: nauc_mrr_at_1000_std
value: -3.8327
- type: nauc_mrr_at_1000_diff1
value: 47.6603
- type: main_score
value: 46.071
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: ndcg_at_1
value: 33.794000000000004
- type: ndcg_at_3
value: 38.442
- type: ndcg_at_5
value: 40.737
- type: ndcg_at_10
value: 43.832
- type: ndcg_at_20
value: 45.589
- type: ndcg_at_100
value: 49.514
- type: ndcg_at_1000
value: 51.742
- type: map_at_1
value: 28.409000000000002
- type: map_at_3
value: 34.337
- type: map_at_5
value: 35.985
- type: map_at_10
value: 37.621
- type: map_at_20
value: 38.391
- type: map_at_100
value: 39.233000000000004
- type: map_at_1000
value: 39.471000000000004
- type: recall_at_1
value: 28.409000000000002
- type: recall_at_3
value: 40.133
- type: recall_at_5
value: 45.913
- type: recall_at_10
value: 55.388000000000005
- type: recall_at_20
value: 62.134
- type: recall_at_100
value: 81.517
- type: recall_at_1000
value: 95.038
- type: precision_at_1
value: 33.794000000000004
- type: precision_at_3
value: 17.787
- type: precision_at_5
value: 13.241
- type: precision_at_10
value: 8.597000000000001
- type: precision_at_20
value: 5.267
- type: precision_at_100
value: 1.652
- type: precision_at_1000
value: 0.251
- type: mrr_at_1
value: 33.7945
- type: mrr_at_3
value: 39.5257
- type: mrr_at_5
value: 41.087
- type: mrr_at_10
value: 42.3491
- type: mrr_at_20
value: 42.7479
- type: mrr_at_100
value: 43.1961
- type: mrr_at_1000
value: 43.2373
- type: nauc_ndcg_at_1_max
value: 43.9886
- type: nauc_ndcg_at_1_std
value: 9.8923
- type: nauc_ndcg_at_1_diff1
value: 50.394000000000005
- type: nauc_ndcg_at_3_max
value: 43.074200000000005
- type: nauc_ndcg_at_3_std
value: 13.5108
- type: nauc_ndcg_at_3_diff1
value: 47.0674
- type: nauc_ndcg_at_5_max
value: 42.810700000000004
- type: nauc_ndcg_at_5_std
value: 14.119499999999999
- type: nauc_ndcg_at_5_diff1
value: 46.822
- type: nauc_ndcg_at_10_max
value: 43.533699999999996
- type: nauc_ndcg_at_10_std
value: 14.009599999999999
- type: nauc_ndcg_at_10_diff1
value: 47.3163
- type: nauc_ndcg_at_20_max
value: 44.4973
- type: nauc_ndcg_at_20_std
value: 14.5044
- type: nauc_ndcg_at_20_diff1
value: 47.2833
- type: nauc_ndcg_at_100_max
value: 44.7593
- type: nauc_ndcg_at_100_std
value: 16.833000000000002
- type: nauc_ndcg_at_100_diff1
value: 47.251599999999996
- type: nauc_ndcg_at_1000_max
value: 44.790600000000005
- type: nauc_ndcg_at_1000_std
value: 15.987199999999998
- type: nauc_ndcg_at_1000_diff1
value: 47.4071
- type: nauc_map_at_1_max
value: 43.4155
- type: nauc_map_at_1_std
value: 6.3514
- type: nauc_map_at_1_diff1
value: 54.8257
- type: nauc_map_at_3_max
value: 43.1906
- type: nauc_map_at_3_std
value: 9.823
- type: nauc_map_at_3_diff1
value: 49.5974
- type: nauc_map_at_5_max
value: 43.1564
- type: nauc_map_at_5_std
value: 10.3498
- type: nauc_map_at_5_diff1
value: 48.7876
- type: nauc_map_at_10_max
value: 43.6805
- type: nauc_map_at_10_std
value: 10.844199999999999
- type: nauc_map_at_10_diff1
value: 48.5759
- type: nauc_map_at_20_max
value: 44.121700000000004
- type: nauc_map_at_20_std
value: 11.6161
- type: nauc_map_at_20_diff1
value: 48.4631
- type: nauc_map_at_100_max
value: 44.1124
- type: nauc_map_at_100_std
value: 12.439
- type: nauc_map_at_100_diff1
value: 48.4742
- type: nauc_map_at_1000_max
value: 44.0146
- type: nauc_map_at_1000_std
value: 12.708
- type: nauc_map_at_1000_diff1
value: 48.5587
- type: nauc_recall_at_1_max
value: 43.4155
- type: nauc_recall_at_1_std
value: 6.3514
- type: nauc_recall_at_1_diff1
value: 54.8257
- type: nauc_recall_at_3_max
value: 40.941300000000005
- type: nauc_recall_at_3_std
value: 12.864700000000001
- type: nauc_recall_at_3_diff1
value: 44.642900000000004
- type: nauc_recall_at_5_max
value: 39.6961
- type: nauc_recall_at_5_std
value: 13.6938
- type: nauc_recall_at_5_diff1
value: 42.142
- type: nauc_recall_at_10_max
value: 40.2068
- type: nauc_recall_at_10_std
value: 14.1258
- type: nauc_recall_at_10_diff1
value: 42.244
- type: nauc_recall_at_20_max
value: 42.7956
- type: nauc_recall_at_20_std
value: 17.518
- type: nauc_recall_at_20_diff1
value: 42.3104
- type: nauc_recall_at_100_max
value: 43.4746
- type: nauc_recall_at_100_std
value: 39.7613
- type: nauc_recall_at_100_diff1
value: 40.5005
- type: nauc_recall_at_1000_max
value: 58.044
- type: nauc_recall_at_1000_std
value: 56.4975
- type: nauc_recall_at_1000_diff1
value: 40.238600000000005
- type: nauc_precision_at_1_max
value: 43.9886
- type: nauc_precision_at_1_std
value: 9.8923
- type: nauc_precision_at_1_diff1
value: 50.394000000000005
- type: nauc_precision_at_3_max
value: 37.436
- type: nauc_precision_at_3_std
value: 19.9652
- type: nauc_precision_at_3_diff1
value: 31.1933
- type: nauc_precision_at_5_max
value: 32.124900000000004
- type: nauc_precision_at_5_std
value: 22.8439
- type: nauc_precision_at_5_diff1
value: 23.325699999999998
- type: nauc_precision_at_10_max
value: 26.956200000000003
- type: nauc_precision_at_10_std
value: 24.7414
- type: nauc_precision_at_10_diff1
value: 15.1951
- type: nauc_precision_at_20_max
value: 20.924799999999998
- type: nauc_precision_at_20_std
value: 27.1802
- type: nauc_precision_at_20_diff1
value: 8.575800000000001
- type: nauc_precision_at_100_max
value: 3.8554
- type: nauc_precision_at_100_std
value: 32.46
- type: nauc_precision_at_100_diff1
value: 1.1094
- type: nauc_precision_at_1000_max
value: -4.0572
- type: nauc_precision_at_1000_std
value: 29.813499999999998
- type: nauc_precision_at_1000_diff1
value: 0.7384
- type: nauc_mrr_at_1_max
value: 43.9886
- type: nauc_mrr_at_1_std
value: 9.8923
- type: nauc_mrr_at_1_diff1
value: 50.394000000000005
- type: nauc_mrr_at_3_max
value: 43.5962
- type: nauc_mrr_at_3_std
value: 13.738
- type: nauc_mrr_at_3_diff1
value: 46.9918
- type: nauc_mrr_at_5_max
value: 43.6259
- type: nauc_mrr_at_5_std
value: 13.3696
- type: nauc_mrr_at_5_diff1
value: 46.7241
- type: nauc_mrr_at_10_max
value: 43.7969
- type: nauc_mrr_at_10_std
value: 13.477500000000001
- type: nauc_mrr_at_10_diff1
value: 47.125499999999995
- type: nauc_mrr_at_20_max
value: 43.8469
- type: nauc_mrr_at_20_std
value: 13.5156
- type: nauc_mrr_at_20_diff1
value: 47.088
- type: nauc_mrr_at_100_max
value: 43.8068
- type: nauc_mrr_at_100_std
value: 13.7051
- type: nauc_mrr_at_100_diff1
value: 47.153600000000004
- type: nauc_mrr_at_1000_max
value: 43.8016
- type: nauc_mrr_at_1000_std
value: 13.661999999999999
- type: nauc_mrr_at_1000_diff1
value: 47.1571
- type: main_score
value: 43.832
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: ndcg_at_1
value: 26.247999999999998
- type: ndcg_at_3
value: 31.799
- type: ndcg_at_5
value: 34.563
- type: ndcg_at_10
value: 36.889
- type: ndcg_at_20
value: 39.330999999999996
- type: ndcg_at_100
value: 42.426
- type: ndcg_at_1000
value: 44.745000000000005
- type: map_at_1
value: 24.067
- type: map_at_3
value: 29.492
- type: map_at_5
value: 31.11
- type: map_at_10
value: 32.184000000000005
- type: map_at_20
value: 32.903
- type: map_at_100
value: 33.357
- type: map_at_1000
value: 33.458
- type: recall_at_1
value: 24.067
- type: recall_at_3
value: 36.272
- type: recall_at_5
value: 42.77
- type: recall_at_10
value: 49.344
- type: recall_at_20
value: 58.46
- type: recall_at_100
value: 74.11999999999999
- type: recall_at_1000
value: 91.276
- type: precision_at_1
value: 26.247999999999998
- type: precision_at_3
value: 13.309000000000001
- type: precision_at_5
value: 9.649000000000001
- type: precision_at_10
value: 5.712
- type: precision_at_20
value: 3.466
- type: precision_at_100
value: 0.915
- type: precision_at_1000
value: 0.123
- type: mrr_at_1
value: 26.247700000000002
- type: mrr_at_3
value: 31.638899999999996
- type: mrr_at_5
value: 33.1824
- type: mrr_at_10
value: 34.1493
- type: mrr_at_20
value: 34.7716
- type: mrr_at_100
value: 35.1893
- type: mrr_at_1000
value: 35.2507
- type: nauc_ndcg_at_1_max
value: 36.3215
- type: nauc_ndcg_at_1_std
value: 0.6172000000000001
- type: nauc_ndcg_at_1_diff1
value: 50.767799999999994
- type: nauc_ndcg_at_3_max
value: 32.5903
- type: nauc_ndcg_at_3_std
value: 2.5009
- type: nauc_ndcg_at_3_diff1
value: 44.7412
- type: nauc_ndcg_at_5_max
value: 32.616499999999995
- type: nauc_ndcg_at_5_std
value: 2.2826
- type: nauc_ndcg_at_5_diff1
value: 41.7193
- type: nauc_ndcg_at_10_max
value: 32.063399999999994
- type: nauc_ndcg_at_10_std
value: 2.7484
- type: nauc_ndcg_at_10_diff1
value: 40.9919
- type: nauc_ndcg_at_20_max
value: 32.6337
- type: nauc_ndcg_at_20_std
value: 3.6401000000000003
- type: nauc_ndcg_at_20_diff1
value: 39.4371
- type: nauc_ndcg_at_100_max
value: 33.4504
- type: nauc_ndcg_at_100_std
value: 6.5571
- type: nauc_ndcg_at_100_diff1
value: 40.103899999999996
- type: nauc_ndcg_at_1000_max
value: 33.413399999999996
- type: nauc_ndcg_at_1000_std
value: 6.1167
- type: nauc_ndcg_at_1000_diff1
value: 40.3296
- type: nauc_map_at_1_max
value: 33.9516
- type: nauc_map_at_1_std
value: -2.0814
- type: nauc_map_at_1_diff1
value: 51.6831
- type: nauc_map_at_3_max
value: 32.4114
- type: nauc_map_at_3_std
value: 0.9002
- type: nauc_map_at_3_diff1
value: 46.3164
- type: nauc_map_at_5_max
value: 32.7406
- type: nauc_map_at_5_std
value: 0.9598000000000001
- type: nauc_map_at_5_diff1
value: 44.576100000000004
- type: nauc_map_at_10_max
value: 32.669
- type: nauc_map_at_10_std
value: 1.4043
- type: nauc_map_at_10_diff1
value: 44.1697
- type: nauc_map_at_20_max
value: 32.807199999999995
- type: nauc_map_at_20_std
value: 1.7632999999999999
- type: nauc_map_at_20_diff1
value: 43.745400000000004
- type: nauc_map_at_100_max
value: 32.9749
- type: nauc_map_at_100_std
value: 2.1647
- type: nauc_map_at_100_diff1
value: 43.8445
- type: nauc_map_at_1000_max
value: 32.9631
- type: nauc_map_at_1000_std
value: 2.164
- type: nauc_map_at_1000_diff1
value: 43.8217
- type: nauc_recall_at_1_max
value: 33.9516
- type: nauc_recall_at_1_std
value: -2.0814
- type: nauc_recall_at_1_diff1
value: 51.6831
- type: nauc_recall_at_3_max
value: 30.248199999999997
- type: nauc_recall_at_3_std
value: 4.3766
- type: nauc_recall_at_3_diff1
value: 40.7147
- type: nauc_recall_at_5_max
value: 29.749799999999997
- type: nauc_recall_at_5_std
value: 3.739
- type: nauc_recall_at_5_diff1
value: 33.4515
- type: nauc_recall_at_10_max
value: 27.8039
- type: nauc_recall_at_10_std
value: 4.3235
- type: nauc_recall_at_10_diff1
value: 31.706200000000003
- type: nauc_recall_at_20_max
value: 29.4726
- type: nauc_recall_at_20_std
value: 7.2537
- type: nauc_recall_at_20_diff1
value: 24.763099999999998
- type: nauc_recall_at_100_max
value: 32.6767
- type: nauc_recall_at_100_std
value: 28.704400000000003
- type: nauc_recall_at_100_diff1
value: 23.6186
- type: nauc_recall_at_1000_max
value: 35.3748
- type: nauc_recall_at_1000_std
value: 49.2642
- type: nauc_recall_at_1000_diff1
value: 15.0664
- type: nauc_precision_at_1_max
value: 36.3215
- type: nauc_precision_at_1_std
value: 0.6172000000000001
- type: nauc_precision_at_1_diff1
value: 50.767799999999994
- type: nauc_precision_at_3_max
value: 32.4313
- type: nauc_precision_at_3_std
value: 6.8161
- type: nauc_precision_at_3_diff1
value: 39.4056
- type: nauc_precision_at_5_max
value: 32.1058
- type: nauc_precision_at_5_std
value: 7.5455
- type: nauc_precision_at_5_diff1
value: 29.119899999999998
- type: nauc_precision_at_10_max
value: 29.9078
- type: nauc_precision_at_10_std
value: 11.8851
- type: nauc_precision_at_10_diff1
value: 22.5166
- type: nauc_precision_at_20_max
value: 29.212300000000003
- type: nauc_precision_at_20_std
value: 16.1047
- type: nauc_precision_at_20_diff1
value: 12.209299999999999
- type: nauc_precision_at_100_max
value: 24.7982
- type: nauc_precision_at_100_std
value: 29.3162
- type: nauc_precision_at_100_diff1
value: 0.8240000000000001
- type: nauc_precision_at_1000_max
value: -0.8333
- type: nauc_precision_at_1000_std
value: 17.0877
- type: nauc_precision_at_1000_diff1
value: -25.4924
- type: nauc_mrr_at_1_max
value: 36.3215
- type: nauc_mrr_at_1_std
value: 0.6172000000000001
- type: nauc_mrr_at_1_diff1
value: 50.767799999999994
- type: nauc_mrr_at_3_max
value: 34.7464
- type: nauc_mrr_at_3_std
value: 2.9025
- type: nauc_mrr_at_3_diff1
value: 45.7566
- type: nauc_mrr_at_5_max
value: 34.454
- type: nauc_mrr_at_5_std
value: 2.9497
- type: nauc_mrr_at_5_diff1
value: 43.948
- type: nauc_mrr_at_10_max
value: 34.1548
- type: nauc_mrr_at_10_std
value: 3.0771
- type: nauc_mrr_at_10_diff1
value: 43.626599999999996
- type: nauc_mrr_at_20_max
value: 34.3061
- type: nauc_mrr_at_20_std
value: 3.2359999999999998
- type: nauc_mrr_at_20_diff1
value: 43.2516
- type: nauc_mrr_at_100_max
value: 34.3776
- type: nauc_mrr_at_100_std
value: 3.5534999999999997
- type: nauc_mrr_at_100_diff1
value: 43.432900000000004
- type: nauc_mrr_at_1000_max
value: 34.3807
- type: nauc_mrr_at_1000_std
value: 3.5423999999999998
- type: nauc_mrr_at_1000_diff1
value: 43.4448
- type: main_score
value: 36.889
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: ndcg_at_1
value: 29.837000000000003
- type: ndcg_at_3
value: 25.392
- type: ndcg_at_5
value: 27.153
- type: ndcg_at_10
value: 30.263
- type: ndcg_at_20
value: 33.073
- type: ndcg_at_100
value: 37.228
- type: ndcg_at_1000
value: 40.677
- type: map_at_1
value: 13.189
- type: map_at_3
value: 18.512999999999998
- type: map_at_5
value: 20.212
- type: map_at_10
value: 21.789
- type: map_at_20
value: 22.787
- type: map_at_100
value: 23.580000000000002
- type: map_at_1000
value: 23.772
- type: recall_at_1
value: 13.189
- type: recall_at_3
value: 23.255
- type: recall_at_5
value: 28.445999999999998
- type: recall_at_10
value: 35.355
- type: recall_at_20
value: 43.187999999999995
- type: recall_at_100
value: 59.255
- type: recall_at_1000
value: 78.637
- type: precision_at_1
value: 29.837000000000003
- type: precision_at_3
value: 18.545
- type: precision_at_5
value: 14.241000000000001
- type: precision_at_10
value: 9.179
- type: precision_at_20
value: 5.808
- type: precision_at_100
value: 1.659
- type: precision_at_1000
value: 0.22999999999999998
- type: mrr_at_1
value: 29.8371
- type: mrr_at_3
value: 38.2845
- type: mrr_at_5
value: 40.300799999999995
- type: mrr_at_10
value: 41.3765
- type: mrr_at_20
value: 41.958400000000005
- type: mrr_at_100
value: 42.281600000000005
- type: mrr_at_1000
value: 42.3193
- type: nauc_ndcg_at_1_max
value: 29.676000000000002
- type: nauc_ndcg_at_1_std
value: 20.4771
- type: nauc_ndcg_at_1_diff1
value: 22.0866
- type: nauc_ndcg_at_3_max
value: 34.3256
- type: nauc_ndcg_at_3_std
value: 18.886400000000002
- type: nauc_ndcg_at_3_diff1
value: 19.692999999999998
- type: nauc_ndcg_at_5_max
value: 36.709599999999995
- type: nauc_ndcg_at_5_std
value: 21.857
- type: nauc_ndcg_at_5_diff1
value: 20.2605
- type: nauc_ndcg_at_10_max
value: 36.951699999999995
- type: nauc_ndcg_at_10_std
value: 24.1201
- type: nauc_ndcg_at_10_diff1
value: 19.5268
- type: nauc_ndcg_at_20_max
value: 37.2598
- type: nauc_ndcg_at_20_std
value: 26.072699999999998
- type: nauc_ndcg_at_20_diff1
value: 18.5947
- type: nauc_ndcg_at_100_max
value: 37.5131
- type: nauc_ndcg_at_100_std
value: 27.3519
- type: nauc_ndcg_at_100_diff1
value: 18.7028
- type: nauc_ndcg_at_1000_max
value: 37.4262
- type: nauc_ndcg_at_1000_std
value: 27.158700000000003
- type: nauc_ndcg_at_1000_diff1
value: 19.2395
- type: nauc_map_at_1_max
value: 32.2132
- type: nauc_map_at_1_std
value: 15.244
- type: nauc_map_at_1_diff1
value: 26.2965
- type: nauc_map_at_3_max
value: 35.157
- type: nauc_map_at_3_std
value: 16.8008
- type: nauc_map_at_3_diff1
value: 21.7011
- type: nauc_map_at_5_max
value: 36.0907
- type: nauc_map_at_5_std
value: 19.0433
- type: nauc_map_at_5_diff1
value: 21.5595
- type: nauc_map_at_10_max
value: 36.1498
- type: nauc_map_at_10_std
value: 20.7259
- type: nauc_map_at_10_diff1
value: 20.816599999999998
- type: nauc_map_at_20_max
value: 36.365199999999994
- type: nauc_map_at_20_std
value: 21.6367
- type: nauc_map_at_20_diff1
value: 20.4563
- type: nauc_map_at_100_max
value: 36.503600000000006
- type: nauc_map_at_100_std
value: 22.020200000000003
- type: nauc_map_at_100_diff1
value: 20.5135
- type: nauc_map_at_1000_max
value: 36.4843
- type: nauc_map_at_1000_std
value: 22.0155
- type: nauc_map_at_1000_diff1
value: 20.5659
- type: nauc_recall_at_1_max
value: 32.2132
- type: nauc_recall_at_1_std
value: 15.244
- type: nauc_recall_at_1_diff1
value: 26.2965
- type: nauc_recall_at_3_max
value: 34.6294
- type: nauc_recall_at_3_std
value: 16.517200000000003
- type: nauc_recall_at_3_diff1
value: 16.6413
- type: nauc_recall_at_5_max
value: 35.938700000000004
- type: nauc_recall_at_5_std
value: 21.1943
- type: nauc_recall_at_5_diff1
value: 16.702
- type: nauc_recall_at_10_max
value: 34.956900000000005
- type: nauc_recall_at_10_std
value: 24.6739
- type: nauc_recall_at_10_diff1
value: 14.4465
- type: nauc_recall_at_20_max
value: 33.873799999999996
- type: nauc_recall_at_20_std
value: 27.9903
- type: nauc_recall_at_20_diff1
value: 11.1114
- type: nauc_recall_at_100_max
value: 33.123799999999996
- type: nauc_recall_at_100_std
value: 31.4933
- type: nauc_recall_at_100_diff1
value: 10.3246
- type: nauc_recall_at_1000_max
value: 32.9304
- type: nauc_recall_at_1000_std
value: 33.5144
- type: nauc_recall_at_1000_diff1
value: 10.810699999999999
- type: nauc_precision_at_1_max
value: 29.676000000000002
- type: nauc_precision_at_1_std
value: 20.4771
- type: nauc_precision_at_1_diff1
value: 22.0866
- type: nauc_precision_at_3_max
value: 32.0765
- type: nauc_precision_at_3_std
value: 20.6039
- type: nauc_precision_at_3_diff1
value: 13.585700000000001
- type: nauc_precision_at_5_max
value: 33.5445
- type: nauc_precision_at_5_std
value: 26.567400000000003
- type: nauc_precision_at_5_diff1
value: 14.421700000000001
- type: nauc_precision_at_10_max
value: 29.520200000000003
- type: nauc_precision_at_10_std
value: 28.8453
- type: nauc_precision_at_10_diff1
value: 11.2529
- type: nauc_precision_at_20_max
value: 25.610300000000002
- type: nauc_precision_at_20_std
value: 30.6799
- type: nauc_precision_at_20_diff1
value: 6.8877
- type: nauc_precision_at_100_max
value: 18.3639
- type: nauc_precision_at_100_std
value: 28.2568
- type: nauc_precision_at_100_diff1
value: 3.8568
- type: nauc_precision_at_1000_max
value: 6.9706
- type: nauc_precision_at_1000_std
value: 18.9339
- type: nauc_precision_at_1000_diff1
value: 0.6999
- type: nauc_mrr_at_1_max
value: 29.676000000000002
- type: nauc_mrr_at_1_std
value: 20.4771
- type: nauc_mrr_at_1_diff1
value: 22.0866
- type: nauc_mrr_at_3_max
value: 32.559900000000006
- type: nauc_mrr_at_3_std
value: 22.1817
- type: nauc_mrr_at_3_diff1
value: 19.1362
- type: nauc_mrr_at_5_max
value: 33.692299999999996
- type: nauc_mrr_at_5_std
value: 23.5179
- type: nauc_mrr_at_5_diff1
value: 19.9908
- type: nauc_mrr_at_10_max
value: 33.6748
- type: nauc_mrr_at_10_std
value: 23.624200000000002
- type: nauc_mrr_at_10_diff1
value: 19.969
- type: nauc_mrr_at_20_max
value: 33.562599999999996
- type: nauc_mrr_at_20_std
value: 23.776
- type: nauc_mrr_at_20_diff1
value: 19.8259
- type: nauc_mrr_at_100_max
value: 33.4998
- type: nauc_mrr_at_100_std
value: 23.7432
- type: nauc_mrr_at_100_diff1
value: 19.8137
- type: nauc_mrr_at_1000_max
value: 33.4876
- type: nauc_mrr_at_1000_std
value: 23.719199999999997
- type: nauc_mrr_at_1000_diff1
value: 19.817
- type: main_score
value: 30.263
- task:
type: Retrieval
dataset:
name: MTEB CodeFeedbackMT (default)
type: CoIR-Retrieval/codefeedback-mt
config: default
split: test
revision: b0f12fa0c0dd67f59c95a5c33d02aeeb4c398c5f
metrics:
- type: ndcg_at_1
value: 27.002
- type: ndcg_at_3
value: 33.597
- type: ndcg_at_5
value: 35.75
- type: ndcg_at_10
value: 37.757000000000005
- type: ndcg_at_20
value: 39.36
- type: ndcg_at_100
value: 41.806
- type: ndcg_at_1000
value: 43.675000000000004
- type: map_at_1
value: 27.002
- type: map_at_3
value: 31.964
- type: map_at_5
value: 33.158
- type: map_at_10
value: 33.988
- type: map_at_20
value: 34.43
- type: map_at_100
value: 34.760000000000005
- type: map_at_1000
value: 34.821999999999996
- type: recall_at_1
value: 27.002
- type: recall_at_3
value: 38.329
- type: recall_at_5
value: 43.557
- type: recall_at_10
value: 49.755
- type: recall_at_20
value: 56.082
- type: recall_at_100
value: 69.376
- type: recall_at_1000
value: 84.56
- type: precision_at_1
value: 27.002
- type: precision_at_3
value: 12.776000000000002
- type: precision_at_5
value: 8.711
- type: precision_at_10
value: 4.976
- type: precision_at_20
value: 2.804
- type: precision_at_100
value: 0.694
- type: precision_at_1000
value: 0.08499999999999999
- type: mrr_at_1
value: 27.001599999999996
- type: mrr_at_3
value: 31.9638
- type: mrr_at_5
value: 33.158300000000004
- type: mrr_at_10
value: 33.9877
- type: mrr_at_20
value: 34.429700000000004
- type: mrr_at_100
value: 34.760200000000005
- type: mrr_at_1000
value: 34.822399999999995
- type: nauc_ndcg_at_1_max
value: 14.691199999999998
- type: nauc_ndcg_at_1_std
value: -18.2481
- type: nauc_ndcg_at_1_diff1
value: 51.82940000000001
- type: nauc_ndcg_at_3_max
value: 15.9155
- type: nauc_ndcg_at_3_std
value: -18.21
- type: nauc_ndcg_at_3_diff1
value: 46.4667
- type: nauc_ndcg_at_5_max
value: 16.2958
- type: nauc_ndcg_at_5_std
value: -17.8939
- type: nauc_ndcg_at_5_diff1
value: 45.4591
- type: nauc_ndcg_at_10_max
value: 16.6542
- type: nauc_ndcg_at_10_std
value: -17.121
- type: nauc_ndcg_at_10_diff1
value: 44.5803
- type: nauc_ndcg_at_20_max
value: 17.210800000000003
- type: nauc_ndcg_at_20_std
value: -16.3918
- type: nauc_ndcg_at_20_diff1
value: 44.0927
- type: nauc_ndcg_at_100_max
value: 17.8597
- type: nauc_ndcg_at_100_std
value: -14.35
- type: nauc_ndcg_at_100_diff1
value: 43.561
- type: nauc_ndcg_at_1000_max
value: 18.0753
- type: nauc_ndcg_at_1000_std
value: -13.827300000000001
- type: nauc_ndcg_at_1000_diff1
value: 43.9433
- type: nauc_map_at_1_max
value: 14.691199999999998
- type: nauc_map_at_1_std
value: -18.2481
- type: nauc_map_at_1_diff1
value: 51.82940000000001
- type: nauc_map_at_3_max
value: 15.657099999999998
- type: nauc_map_at_3_std
value: -18.253700000000002
- type: nauc_map_at_3_diff1
value: 47.749399999999994
- type: nauc_map_at_5_max
value: 15.8683
- type: nauc_map_at_5_std
value: -18.0718
- type: nauc_map_at_5_diff1
value: 47.176899999999996
- type: nauc_map_at_10_max
value: 16.0118
- type: nauc_map_at_10_std
value: -17.7494
- type: nauc_map_at_10_diff1
value: 46.818799999999996
- type: nauc_map_at_20_max
value: 16.1658
- type: nauc_map_at_20_std
value: -17.552400000000002
- type: nauc_map_at_20_diff1
value: 46.694
- type: nauc_map_at_100_max
value: 16.2407
- type: nauc_map_at_100_std
value: -17.289099999999998
- type: nauc_map_at_100_diff1
value: 46.6325
- type: nauc_map_at_1000_max
value: 16.2491
- type: nauc_map_at_1000_std
value: -17.2655
- type: nauc_map_at_1000_diff1
value: 46.646300000000004
- type: nauc_recall_at_1_max
value: 14.691199999999998
- type: nauc_recall_at_1_std
value: -18.2481
- type: nauc_recall_at_1_diff1
value: 51.82940000000001
- type: nauc_recall_at_3_max
value: 16.6167
- type: nauc_recall_at_3_std
value: -18.0762
- type: nauc_recall_at_3_diff1
value: 42.9204
- type: nauc_recall_at_5_max
value: 17.522299999999998
- type: nauc_recall_at_5_std
value: -17.349899999999998
- type: nauc_recall_at_5_diff1
value: 40.5682
- type: nauc_recall_at_10_max
value: 18.6573
- type: nauc_recall_at_10_std
value: -14.9976
- type: nauc_recall_at_10_diff1
value: 37.7799
- type: nauc_recall_at_20_max
value: 21.0226
- type: nauc_recall_at_20_std
value: -11.8854
- type: nauc_recall_at_20_diff1
value: 35.3475
- type: nauc_recall_at_100_max
value: 26.442300000000003
- type: nauc_recall_at_100_std
value: 2.9998
- type: nauc_recall_at_100_diff1
value: 29.618699999999997
- type: nauc_recall_at_1000_max
value: 36.3607
- type: nauc_recall_at_1000_std
value: 24.0336
- type: nauc_recall_at_1000_diff1
value: 25.6114
- type: nauc_precision_at_1_max
value: 14.691199999999998
- type: nauc_precision_at_1_std
value: -18.2481
- type: nauc_precision_at_1_diff1
value: 51.82940000000001
- type: nauc_precision_at_3_max
value: 16.6167
- type: nauc_precision_at_3_std
value: -18.0762
- type: nauc_precision_at_3_diff1
value: 42.9204
- type: nauc_precision_at_5_max
value: 17.522299999999998
- type: nauc_precision_at_5_std
value: -17.349899999999998
- type: nauc_precision_at_5_diff1
value: 40.5682
- type: nauc_precision_at_10_max
value: 18.6573
- type: nauc_precision_at_10_std
value: -14.9976
- type: nauc_precision_at_10_diff1
value: 37.7799
- type: nauc_precision_at_20_max
value: 21.0226
- type: nauc_precision_at_20_std
value: -11.8854
- type: nauc_precision_at_20_diff1
value: 35.3475
- type: nauc_precision_at_100_max
value: 26.442300000000003
- type: nauc_precision_at_100_std
value: 2.9998
- type: nauc_precision_at_100_diff1
value: 29.618699999999997
- type: nauc_precision_at_1000_max
value: 36.3607
- type: nauc_precision_at_1000_std
value: 24.0336
- type: nauc_precision_at_1000_diff1
value: 25.6114
- type: nauc_mrr_at_1_max
value: 14.691199999999998
- type: nauc_mrr_at_1_std
value: -18.2481
- type: nauc_mrr_at_1_diff1
value: 51.82940000000001
- type: nauc_mrr_at_3_max
value: 15.657099999999998
- type: nauc_mrr_at_3_std
value: -18.253700000000002
- type: nauc_mrr_at_3_diff1
value: 47.749399999999994
- type: nauc_mrr_at_5_max
value: 15.8683
- type: nauc_mrr_at_5_std
value: -18.0718
- type: nauc_mrr_at_5_diff1
value: 47.176899999999996
- type: nauc_mrr_at_10_max
value: 16.0118
- type: nauc_mrr_at_10_std
value: -17.7494
- type: nauc_mrr_at_10_diff1
value: 46.818799999999996
- type: nauc_mrr_at_20_max
value: 16.1658
- type: nauc_mrr_at_20_std
value: -17.552400000000002
- type: nauc_mrr_at_20_diff1
value: 46.694
- type: nauc_mrr_at_100_max
value: 16.2407
- type: nauc_mrr_at_100_std
value: -17.289099999999998
- type: nauc_mrr_at_100_diff1
value: 46.6325
- type: nauc_mrr_at_1000_max
value: 16.2491
- type: nauc_mrr_at_1000_std
value: -17.2655
- type: nauc_mrr_at_1000_diff1
value: 46.646300000000004
- type: main_score
value: 37.757000000000005
- task:
type: Retrieval
dataset:
name: MTEB CodeFeedbackST (default)
type: CoIR-Retrieval/codefeedback-st
config: default
split: test
revision: d213819e87aab9010628da8b73ab4eb337c89340
metrics:
- type: ndcg_at_1
value: 53.335
- type: ndcg_at_3
value: 64.78399999999999
- type: ndcg_at_5
value: 67.418
- type: ndcg_at_10
value: 69.425
- type: ndcg_at_20
value: 70.513
- type: ndcg_at_100
value: 71.709
- type: ndcg_at_1000
value: 72.139
- type: map_at_1
value: 53.335
- type: map_at_3
value: 62.0
- type: map_at_5
value: 63.467
- type: map_at_10
value: 64.306
- type: map_at_20
value: 64.608
- type: map_at_100
value: 64.776
- type: map_at_1000
value: 64.793
- type: recall_at_1
value: 53.335
- type: recall_at_3
value: 72.82600000000001
- type: recall_at_5
value: 79.199
- type: recall_at_10
value: 85.354
- type: recall_at_20
value: 89.628
- type: recall_at_100
value: 96.039
- type: recall_at_1000
value: 99.368
- type: precision_at_1
value: 53.335
- type: precision_at_3
value: 24.275
- type: precision_at_5
value: 15.840000000000002
- type: precision_at_10
value: 8.535
- type: precision_at_20
value: 4.481
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.099
- type: mrr_at_1
value: 53.31249999999999
- type: mrr_at_3
value: 62.0217
- type: mrr_at_5
value: 63.489700000000006
- type: mrr_at_10
value: 64.3214
- type: mrr_at_20
value: 64.6232
- type: mrr_at_100
value: 64.7915
- type: mrr_at_1000
value: 64.8086
- type: nauc_ndcg_at_1_max
value: 4.5411
- type: nauc_ndcg_at_1_std
value: -27.4357
- type: nauc_ndcg_at_1_diff1
value: 70.331
- type: nauc_ndcg_at_3_max
value: 9.293899999999999
- type: nauc_ndcg_at_3_std
value: -30.4201
- type: nauc_ndcg_at_3_diff1
value: 64.90599999999999
- type: nauc_ndcg_at_5_max
value: 9.725
- type: nauc_ndcg_at_5_std
value: -30.8448
- type: nauc_ndcg_at_5_diff1
value: 64.2796
- type: nauc_ndcg_at_10_max
value: 9.4302
- type: nauc_ndcg_at_10_std
value: -30.5425
- type: nauc_ndcg_at_10_diff1
value: 64.5211
- type: nauc_ndcg_at_20_max
value: 9.019
- type: nauc_ndcg_at_20_std
value: -29.986800000000002
- type: nauc_ndcg_at_20_diff1
value: 64.7995
- type: nauc_ndcg_at_100_max
value: 8.780100000000001
- type: nauc_ndcg_at_100_std
value: -29.4587
- type: nauc_ndcg_at_100_diff1
value: 65.3485
- type: nauc_ndcg_at_1000_max
value: 8.5933
- type: nauc_ndcg_at_1000_std
value: -29.462300000000003
- type: nauc_ndcg_at_1000_diff1
value: 65.5513
- type: nauc_map_at_1_max
value: 4.5411
- type: nauc_map_at_1_std
value: -27.4357
- type: nauc_map_at_1_diff1
value: 70.331
- type: nauc_map_at_3_max
value: 7.9982
- type: nauc_map_at_3_std
value: -29.5826
- type: nauc_map_at_3_diff1
value: 66.2961
- type: nauc_map_at_5_max
value: 8.1756
- type: nauc_map_at_5_std
value: -29.765900000000002
- type: nauc_map_at_5_diff1
value: 66.0248
- type: nauc_map_at_10_max
value: 8.0296
- type: nauc_map_at_10_std
value: -29.6458
- type: nauc_map_at_10_diff1
value: 66.158
- type: nauc_map_at_20_max
value: 7.919099999999999
- type: nauc_map_at_20_std
value: -29.505799999999997
- type: nauc_map_at_20_diff1
value: 66.24029999999999
- type: nauc_map_at_100_max
value: 7.8803
- type: nauc_map_at_100_std
value: -29.442600000000002
- type: nauc_map_at_100_diff1
value: 66.3125
- type: nauc_map_at_1000_max
value: 7.8752
- type: nauc_map_at_1000_std
value: -29.438399999999998
- type: nauc_map_at_1000_diff1
value: 66.3195
- type: nauc_recall_at_1_max
value: 4.5411
- type: nauc_recall_at_1_std
value: -27.4357
- type: nauc_recall_at_1_diff1
value: 70.331
- type: nauc_recall_at_3_max
value: 13.911000000000001
- type: nauc_recall_at_3_std
value: -33.4167
- type: nauc_recall_at_3_diff1
value: 59.9986
- type: nauc_recall_at_5_max
value: 16.401
- type: nauc_recall_at_5_std
value: -35.5473
- type: nauc_recall_at_5_diff1
value: 56.781000000000006
- type: nauc_recall_at_10_max
value: 17.2917
- type: nauc_recall_at_10_std
value: -35.4908
- type: nauc_recall_at_10_diff1
value: 55.279199999999996
- type: nauc_recall_at_20_max
value: 16.4243
- type: nauc_recall_at_20_std
value: -32.2776
- type: nauc_recall_at_20_diff1
value: 54.4386
- type: nauc_recall_at_100_max
value: 21.5949
- type: nauc_recall_at_100_std
value: -19.9444
- type: nauc_recall_at_100_diff1
value: 54.3502
- type: nauc_recall_at_1000_max
value: 35.8557
- type: nauc_recall_at_1000_std
value: 18.242
- type: nauc_recall_at_1000_diff1
value: 50.969699999999996
- type: nauc_precision_at_1_max
value: 4.5411
- type: nauc_precision_at_1_std
value: -27.4357
- type: nauc_precision_at_1_diff1
value: 70.331
- type: nauc_precision_at_3_max
value: 13.911000000000001
- type: nauc_precision_at_3_std
value: -33.4167
- type: nauc_precision_at_3_diff1
value: 59.9986
- type: nauc_precision_at_5_max
value: 16.401
- type: nauc_precision_at_5_std
value: -35.5473
- type: nauc_precision_at_5_diff1
value: 56.781000000000006
- type: nauc_precision_at_10_max
value: 17.2917
- type: nauc_precision_at_10_std
value: -35.4908
- type: nauc_precision_at_10_diff1
value: 55.279199999999996
- type: nauc_precision_at_20_max
value: 16.4243
- type: nauc_precision_at_20_std
value: -32.2776
- type: nauc_precision_at_20_diff1
value: 54.4386
- type: nauc_precision_at_100_max
value: 21.5949
- type: nauc_precision_at_100_std
value: -19.9444
- type: nauc_precision_at_100_diff1
value: 54.3502
- type: nauc_precision_at_1000_max
value: 35.8557
- type: nauc_precision_at_1000_std
value: 18.242
- type: nauc_precision_at_1000_diff1
value: 50.969699999999996
- type: nauc_mrr_at_1_max
value: 4.045
- type: nauc_mrr_at_1_std
value: -27.371299999999998
- type: nauc_mrr_at_1_diff1
value: 70.3681
- type: nauc_mrr_at_3_max
value: 7.7906
- type: nauc_mrr_at_3_std
value: -29.488999999999997
- type: nauc_mrr_at_3_diff1
value: 66.2574
- type: nauc_mrr_at_5_max
value: 7.8858999999999995
- type: nauc_mrr_at_5_std
value: -29.7336
- type: nauc_mrr_at_5_diff1
value: 66.0274
- type: nauc_mrr_at_10_max
value: 7.7456
- type: nauc_mrr_at_10_std
value: -29.5912
- type: nauc_mrr_at_10_diff1
value: 66.1546
- type: nauc_mrr_at_20_max
value: 7.6305
- type: nauc_mrr_at_20_std
value: -29.4551
- type: nauc_mrr_at_20_diff1
value: 66.2342
- type: nauc_mrr_at_100_max
value: 7.589799999999999
- type: nauc_mrr_at_100_std
value: -29.392400000000002
- type: nauc_mrr_at_100_diff1
value: 66.3072
- type: nauc_mrr_at_1000_max
value: 7.584499999999999
- type: nauc_mrr_at_1000_std
value: -29.3881
- type: nauc_mrr_at_1000_diff1
value: 66.3142
- type: main_score
value: 69.425
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetCCRetrieval (python)
type: CoIR-Retrieval/CodeSearchNet-ccr
config: python
split: test
revision: 6e1effa2c03723c5fde48ee912b5ee08d4f211e8
metrics:
- type: ndcg_at_1
value: 39.395
- type: ndcg_at_3
value: 49.038
- type: ndcg_at_5
value: 51.398999999999994
- type: ndcg_at_10
value: 53.593999999999994
- type: ndcg_at_20
value: 55.013
- type: ndcg_at_100
value: 56.940999999999995
- type: ndcg_at_1000
value: 58.126999999999995
- type: map_at_1
value: 39.395
- type: map_at_3
value: 46.687
- type: map_at_5
value: 48.003
- type: map_at_10
value: 48.911
- type: map_at_20
value: 49.305
- type: map_at_100
value: 49.571
- type: map_at_1000
value: 49.612
- type: recall_at_1
value: 39.395
- type: recall_at_3
value: 55.832
- type: recall_at_5
value: 61.543000000000006
- type: recall_at_10
value: 68.313
- type: recall_at_20
value: 73.897
- type: recall_at_100
value: 84.308
- type: recall_at_1000
value: 93.866
- type: precision_at_1
value: 39.395
- type: precision_at_3
value: 18.611
- type: precision_at_5
value: 12.309000000000001
- type: precision_at_10
value: 6.8309999999999995
- type: precision_at_20
value: 3.695
- type: precision_at_100
value: 0.843
- type: precision_at_1000
value: 0.094
- type: mrr_at_1
value: 39.402100000000004
- type: mrr_at_3
value: 46.690799999999996
- type: mrr_at_5
value: 48.0073
- type: mrr_at_10
value: 48.9156
- type: mrr_at_20
value: 49.3097
- type: mrr_at_100
value: 49.5752
- type: mrr_at_1000
value: 49.6159
- type: nauc_ndcg_at_1_max
value: 29.945899999999998
- type: nauc_ndcg_at_1_std
value: -7.957
- type: nauc_ndcg_at_1_diff1
value: 55.8451
- type: nauc_ndcg_at_3_max
value: 31.5415
- type: nauc_ndcg_at_3_std
value: -8.2198
- type: nauc_ndcg_at_3_diff1
value: 51.75959999999999
- type: nauc_ndcg_at_5_max
value: 31.6664
- type: nauc_ndcg_at_5_std
value: -7.1463
- type: nauc_ndcg_at_5_diff1
value: 51.0188
- type: nauc_ndcg_at_10_max
value: 31.616
- type: nauc_ndcg_at_10_std
value: -6.575699999999999
- type: nauc_ndcg_at_10_diff1
value: 50.7344
- type: nauc_ndcg_at_20_max
value: 31.626199999999997
- type: nauc_ndcg_at_20_std
value: -6.0725
- type: nauc_ndcg_at_20_diff1
value: 50.77159999999999
- type: nauc_ndcg_at_100_max
value: 31.6639
- type: nauc_ndcg_at_100_std
value: -5.4948999999999995
- type: nauc_ndcg_at_100_diff1
value: 50.790800000000004
- type: nauc_ndcg_at_1000_max
value: 31.5161
- type: nauc_ndcg_at_1000_std
value: -5.748600000000001
- type: nauc_ndcg_at_1000_diff1
value: 51.062799999999996
- type: nauc_map_at_1_max
value: 29.945899999999998
- type: nauc_map_at_1_std
value: -7.957
- type: nauc_map_at_1_diff1
value: 55.8451
- type: nauc_map_at_3_max
value: 31.1851
- type: nauc_map_at_3_std
value: -8.1706
- type: nauc_map_at_3_diff1
value: 52.7057
- type: nauc_map_at_5_max
value: 31.2519
- type: nauc_map_at_5_std
value: -7.580299999999999
- type: nauc_map_at_5_diff1
value: 52.3165
- type: nauc_map_at_10_max
value: 31.231399999999997
- type: nauc_map_at_10_std
value: -7.360800000000001
- type: nauc_map_at_10_diff1
value: 52.23
- type: nauc_map_at_20_max
value: 31.2307
- type: nauc_map_at_20_std
value: -7.2384
- type: nauc_map_at_20_diff1
value: 52.2532
- type: nauc_map_at_100_max
value: 31.2368
- type: nauc_map_at_100_std
value: -7.1598
- type: nauc_map_at_100_diff1
value: 52.260600000000004
- type: nauc_map_at_1000_max
value: 31.230900000000002
- type: nauc_map_at_1000_std
value: -7.1662
- type: nauc_map_at_1000_diff1
value: 52.267300000000006
- type: nauc_recall_at_1_max
value: 29.945899999999998
- type: nauc_recall_at_1_std
value: -7.957
- type: nauc_recall_at_1_diff1
value: 55.8451
- type: nauc_recall_at_3_max
value: 32.6121
- type: nauc_recall_at_3_std
value: -8.363
- type: nauc_recall_at_3_diff1
value: 48.9016
- type: nauc_recall_at_5_max
value: 33.0025
- type: nauc_recall_at_5_std
value: -5.5725
- type: nauc_recall_at_5_diff1
value: 46.7352
- type: nauc_recall_at_10_max
value: 32.9683
- type: nauc_recall_at_10_std
value: -3.2460999999999998
- type: nauc_recall_at_10_diff1
value: 45.0443
- type: nauc_recall_at_20_max
value: 33.2455
- type: nauc_recall_at_20_std
value: -0.0093
- type: nauc_recall_at_20_diff1
value: 44.294200000000004
- type: nauc_recall_at_100_max
value: 34.4004
- type: nauc_recall_at_100_std
value: 8.996500000000001
- type: nauc_recall_at_100_diff1
value: 41.0779
- type: nauc_recall_at_1000_max
value: 33.096399999999996
- type: nauc_recall_at_1000_std
value: 19.266
- type: nauc_recall_at_1000_diff1
value: 38.2966
- type: nauc_precision_at_1_max
value: 29.945899999999998
- type: nauc_precision_at_1_std
value: -7.957
- type: nauc_precision_at_1_diff1
value: 55.8451
- type: nauc_precision_at_3_max
value: 32.6121
- type: nauc_precision_at_3_std
value: -8.363
- type: nauc_precision_at_3_diff1
value: 48.9016
- type: nauc_precision_at_5_max
value: 33.0025
- type: nauc_precision_at_5_std
value: -5.5725
- type: nauc_precision_at_5_diff1
value: 46.7352
- type: nauc_precision_at_10_max
value: 32.9683
- type: nauc_precision_at_10_std
value: -3.2460999999999998
- type: nauc_precision_at_10_diff1
value: 45.0443
- type: nauc_precision_at_20_max
value: 33.2455
- type: nauc_precision_at_20_std
value: -0.0093
- type: nauc_precision_at_20_diff1
value: 44.294200000000004
- type: nauc_precision_at_100_max
value: 34.4004
- type: nauc_precision_at_100_std
value: 8.996500000000001
- type: nauc_precision_at_100_diff1
value: 41.0779
- type: nauc_precision_at_1000_max
value: 33.096399999999996
- type: nauc_precision_at_1000_std
value: 19.266
- type: nauc_precision_at_1000_diff1
value: 38.2966
- type: nauc_mrr_at_1_max
value: 29.9427
- type: nauc_mrr_at_1_std
value: -7.9670000000000005
- type: nauc_mrr_at_1_diff1
value: 55.824799999999996
- type: nauc_mrr_at_3_max
value: 31.1834
- type: nauc_mrr_at_3_std
value: -8.175799999999999
- type: nauc_mrr_at_3_diff1
value: 52.6952
- type: nauc_mrr_at_5_max
value: 31.2515
- type: nauc_mrr_at_5_std
value: -7.5835
- type: nauc_mrr_at_5_diff1
value: 52.303599999999996
- type: nauc_mrr_at_10_max
value: 31.2284
- type: nauc_mrr_at_10_std
value: -7.3647
- type: nauc_mrr_at_10_diff1
value: 52.2177
- type: nauc_mrr_at_20_max
value: 31.2274
- type: nauc_mrr_at_20_std
value: -7.243399999999999
- type: nauc_mrr_at_20_diff1
value: 52.2417
- type: nauc_mrr_at_100_max
value: 31.2336
- type: nauc_mrr_at_100_std
value: -7.1640999999999995
- type: nauc_mrr_at_100_diff1
value: 52.2482
- type: nauc_mrr_at_1000_max
value: 31.227700000000002
- type: nauc_mrr_at_1000_std
value: -7.1705000000000005
- type: nauc_mrr_at_1000_diff1
value: 52.254900000000006
- type: main_score
value: 53.593999999999994
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetCCRetrieval (javascript)
type: CoIR-Retrieval/CodeSearchNet-ccr
config: javascript
split: test
revision: 6e1effa2c03723c5fde48ee912b5ee08d4f211e8
metrics:
- type: ndcg_at_1
value: 39.593
- type: ndcg_at_3
value: 48.759
- type: ndcg_at_5
value: 51.073
- type: ndcg_at_10
value: 53.1
- type: ndcg_at_20
value: 54.230999999999995
- type: ndcg_at_100
value: 56.289
- type: ndcg_at_1000
value: 57.67400000000001
- type: map_at_1
value: 39.593
- type: map_at_3
value: 46.536
- type: map_at_5
value: 47.826
- type: map_at_10
value: 48.676
- type: map_at_20
value: 48.983
- type: map_at_100
value: 49.268
- type: map_at_1000
value: 49.313
- type: recall_at_1
value: 39.593
- type: recall_at_3
value: 55.181000000000004
- type: recall_at_5
value: 60.772000000000006
- type: recall_at_10
value: 66.971
- type: recall_at_20
value: 71.468
- type: recall_at_100
value: 82.55799999999999
- type: recall_at_1000
value: 93.83200000000001
- type: precision_at_1
value: 39.593
- type: precision_at_3
value: 18.394
- type: precision_at_5
value: 12.154
- type: precision_at_10
value: 6.697
- type: precision_at_20
value: 3.573
- type: precision_at_100
value: 0.826
- type: precision_at_1000
value: 0.094
- type: mrr_at_1
value: 39.5624
- type: mrr_at_3
value: 46.5158
- type: mrr_at_5
value: 47.8056
- type: mrr_at_10
value: 48.654799999999994
- type: mrr_at_20
value: 48.9616
- type: mrr_at_100
value: 49.2469
- type: mrr_at_1000
value: 49.2923
- type: nauc_ndcg_at_1_max
value: 26.582099999999997
- type: nauc_ndcg_at_1_std
value: -14.751900000000001
- type: nauc_ndcg_at_1_diff1
value: 54.9795
- type: nauc_ndcg_at_3_max
value: 30.000700000000002
- type: nauc_ndcg_at_3_std
value: -13.107299999999999
- type: nauc_ndcg_at_3_diff1
value: 51.7972
- type: nauc_ndcg_at_5_max
value: 29.4468
- type: nauc_ndcg_at_5_std
value: -13.3189
- type: nauc_ndcg_at_5_diff1
value: 51.0062
- type: nauc_ndcg_at_10_max
value: 28.6629
- type: nauc_ndcg_at_10_std
value: -13.900000000000002
- type: nauc_ndcg_at_10_diff1
value: 50.4771
- type: nauc_ndcg_at_20_max
value: 28.558600000000002
- type: nauc_ndcg_at_20_std
value: -13.793
- type: nauc_ndcg_at_20_diff1
value: 50.720299999999995
- type: nauc_ndcg_at_100_max
value: 28.7124
- type: nauc_ndcg_at_100_std
value: -13.133000000000001
- type: nauc_ndcg_at_100_diff1
value: 50.7983
- type: nauc_ndcg_at_1000_max
value: 28.4906
- type: nauc_ndcg_at_1000_std
value: -13.5678
- type: nauc_ndcg_at_1000_diff1
value: 51.1172
- type: nauc_map_at_1_max
value: 26.582099999999997
- type: nauc_map_at_1_std
value: -14.751900000000001
- type: nauc_map_at_1_diff1
value: 54.9795
- type: nauc_map_at_3_max
value: 29.191899999999997
- type: nauc_map_at_3_std
value: -13.565299999999999
- type: nauc_map_at_3_diff1
value: 52.5372
- type: nauc_map_at_5_max
value: 28.865099999999998
- type: nauc_map_at_5_std
value: -13.6911
- type: nauc_map_at_5_diff1
value: 52.12520000000001
- type: nauc_map_at_10_max
value: 28.5526
- type: nauc_map_at_10_std
value: -13.9255
- type: nauc_map_at_10_diff1
value: 51.931400000000004
- type: nauc_map_at_20_max
value: 28.520200000000003
- type: nauc_map_at_20_std
value: -13.8934
- type: nauc_map_at_20_diff1
value: 51.991299999999995
- type: nauc_map_at_100_max
value: 28.5184
- type: nauc_map_at_100_std
value: -13.8399
- type: nauc_map_at_100_diff1
value: 52.0024
- type: nauc_map_at_1000_max
value: 28.512500000000003
- type: nauc_map_at_1000_std
value: -13.851700000000001
- type: nauc_map_at_1000_diff1
value: 52.0139
- type: nauc_recall_at_1_max
value: 26.582099999999997
- type: nauc_recall_at_1_std
value: -14.751900000000001
- type: nauc_recall_at_1_diff1
value: 54.9795
- type: nauc_recall_at_3_max
value: 32.443
- type: nauc_recall_at_3_std
value: -11.6927
- type: nauc_recall_at_3_diff1
value: 49.568400000000004
- type: nauc_recall_at_5_max
value: 31.2258
- type: nauc_recall_at_5_std
value: -12.1296
- type: nauc_recall_at_5_diff1
value: 47.3057
- type: nauc_recall_at_10_max
value: 28.561999999999998
- type: nauc_recall_at_10_std
value: -14.103499999999999
- type: nauc_recall_at_10_diff1
value: 44.9228
- type: nauc_recall_at_20_max
value: 28.0738
- type: nauc_recall_at_20_std
value: -13.632
- type: nauc_recall_at_20_diff1
value: 45.6569
- type: nauc_recall_at_100_max
value: 29.9618
- type: nauc_recall_at_100_std
value: -6.2382
- type: nauc_recall_at_100_diff1
value: 44.1378
- type: nauc_recall_at_1000_max
value: 23.4062
- type: nauc_recall_at_1000_std
value: -11.6326
- type: nauc_recall_at_1000_diff1
value: 45.130199999999995
- type: nauc_precision_at_1_max
value: 26.582099999999997
- type: nauc_precision_at_1_std
value: -14.751900000000001
- type: nauc_precision_at_1_diff1
value: 54.9795
- type: nauc_precision_at_3_max
value: 32.443
- type: nauc_precision_at_3_std
value: -11.6927
- type: nauc_precision_at_3_diff1
value: 49.568400000000004
- type: nauc_precision_at_5_max
value: 31.2258
- type: nauc_precision_at_5_std
value: -12.1296
- type: nauc_precision_at_5_diff1
value: 47.3057
- type: nauc_precision_at_10_max
value: 28.561999999999998
- type: nauc_precision_at_10_std
value: -14.103499999999999
- type: nauc_precision_at_10_diff1
value: 44.9228
- type: nauc_precision_at_20_max
value: 28.0738
- type: nauc_precision_at_20_std
value: -13.632
- type: nauc_precision_at_20_diff1
value: 45.6569
- type: nauc_precision_at_100_max
value: 29.9618
- type: nauc_precision_at_100_std
value: -6.2382
- type: nauc_precision_at_100_diff1
value: 44.1378
- type: nauc_precision_at_1000_max
value: 23.4062
- type: nauc_precision_at_1000_std
value: -11.6326
- type: nauc_precision_at_1000_diff1
value: 45.130199999999995
- type: nauc_mrr_at_1_max
value: 26.571499999999997
- type: nauc_mrr_at_1_std
value: -14.9002
- type: nauc_mrr_at_1_diff1
value: 55.071400000000004
- type: nauc_mrr_at_3_max
value: 29.1956
- type: nauc_mrr_at_3_std
value: -13.6331
- type: nauc_mrr_at_3_diff1
value: 52.59439999999999
- type: nauc_mrr_at_5_max
value: 28.8688
- type: nauc_mrr_at_5_std
value: -13.7599
- type: nauc_mrr_at_5_diff1
value: 52.1832
- type: nauc_mrr_at_10_max
value: 28.556199999999997
- type: nauc_mrr_at_10_std
value: -13.9924
- type: nauc_mrr_at_10_diff1
value: 51.9865
- type: nauc_mrr_at_20_max
value: 28.523799999999998
- type: nauc_mrr_at_20_std
value: -13.960700000000001
- type: nauc_mrr_at_20_diff1
value: 52.0466
- type: nauc_mrr_at_100_max
value: 28.522
- type: nauc_mrr_at_100_std
value: -13.9076
- type: nauc_mrr_at_100_diff1
value: 52.058099999999996
- type: nauc_mrr_at_1000_max
value: 28.5161
- type: nauc_mrr_at_1000_std
value: -13.919500000000001
- type: nauc_mrr_at_1000_diff1
value: 52.0697
- type: main_score
value: 53.1
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetCCRetrieval (go)
type: CoIR-Retrieval/CodeSearchNet-ccr
config: go
split: test
revision: 6e1effa2c03723c5fde48ee912b5ee08d4f211e8
metrics:
- type: ndcg_at_1
value: 30.459999999999997
- type: ndcg_at_3
value: 37.88
- type: ndcg_at_5
value: 40.11
- type: ndcg_at_10
value: 42.094
- type: ndcg_at_20
value: 43.683
- type: ndcg_at_100
value: 45.998
- type: ndcg_at_1000
value: 47.723
- type: map_at_1
value: 30.459999999999997
- type: map_at_3
value: 36.046
- type: map_at_5
value: 37.285000000000004
- type: map_at_10
value: 38.108
- type: map_at_20
value: 38.546
- type: map_at_100
value: 38.859
- type: map_at_1000
value: 38.917
- type: recall_at_1
value: 30.459999999999997
- type: recall_at_3
value: 43.191
- type: recall_at_5
value: 48.596000000000004
- type: recall_at_10
value: 54.716
- type: recall_at_20
value: 60.983
- type: recall_at_100
value: 73.566
- type: recall_at_1000
value: 87.515
- type: precision_at_1
value: 30.459999999999997
- type: precision_at_3
value: 14.396999999999998
- type: precision_at_5
value: 9.719
- type: precision_at_10
value: 5.4719999999999995
- type: precision_at_20
value: 3.049
- type: precision_at_100
value: 0.736
- type: precision_at_1000
value: 0.08800000000000001
- type: mrr_at_1
value: 30.448199999999996
- type: mrr_at_3
value: 36.042
- type: mrr_at_5
value: 37.2763
- type: mrr_at_10
value: 38.1013
- type: mrr_at_20
value: 38.5373
- type: mrr_at_100
value: 38.8506
- type: mrr_at_1000
value: 38.9093
- type: nauc_ndcg_at_1_max
value: 27.284999999999997
- type: nauc_ndcg_at_1_std
value: -6.6476999999999995
- type: nauc_ndcg_at_1_diff1
value: 50.871500000000005
- type: nauc_ndcg_at_3_max
value: 26.6017
- type: nauc_ndcg_at_3_std
value: -7.6026
- type: nauc_ndcg_at_3_diff1
value: 46.768
- type: nauc_ndcg_at_5_max
value: 26.2865
- type: nauc_ndcg_at_5_std
value: -7.3601
- type: nauc_ndcg_at_5_diff1
value: 45.7969
- type: nauc_ndcg_at_10_max
value: 25.746599999999997
- type: nauc_ndcg_at_10_std
value: -7.4333
- type: nauc_ndcg_at_10_diff1
value: 45.4115
- type: nauc_ndcg_at_20_max
value: 25.5118
- type: nauc_ndcg_at_20_std
value: -6.9322
- type: nauc_ndcg_at_20_diff1
value: 45.0598
- type: nauc_ndcg_at_100_max
value: 25.309900000000003
- type: nauc_ndcg_at_100_std
value: -6.0600000000000005
- type: nauc_ndcg_at_100_diff1
value: 44.8825
- type: nauc_ndcg_at_1000_max
value: 25.521700000000003
- type: nauc_ndcg_at_1000_std
value: -5.9789
- type: nauc_ndcg_at_1000_diff1
value: 45.2513
- type: nauc_map_at_1_max
value: 27.284999999999997
- type: nauc_map_at_1_std
value: -6.6476999999999995
- type: nauc_map_at_1_diff1
value: 50.871500000000005
- type: nauc_map_at_3_max
value: 26.7721
- type: nauc_map_at_3_std
value: -7.452300000000001
- type: nauc_map_at_3_diff1
value: 47.7211
- type: nauc_map_at_5_max
value: 26.600600000000004
- type: nauc_map_at_5_std
value: -7.3378
- type: nauc_map_at_5_diff1
value: 47.1879
- type: nauc_map_at_10_max
value: 26.372
- type: nauc_map_at_10_std
value: -7.3735
- type: nauc_map_at_10_diff1
value: 47.0298
- type: nauc_map_at_20_max
value: 26.3071
- type: nauc_map_at_20_std
value: -7.2452000000000005
- type: nauc_map_at_20_diff1
value: 46.9294
- type: nauc_map_at_100_max
value: 26.281100000000002
- type: nauc_map_at_100_std
value: -7.1155
- type: nauc_map_at_100_diff1
value: 46.9054
- type: nauc_map_at_1000_max
value: 26.2903
- type: nauc_map_at_1000_std
value: -7.1089
- type: nauc_map_at_1000_diff1
value: 46.9182
- type: nauc_recall_at_1_max
value: 27.284999999999997
- type: nauc_recall_at_1_std
value: -6.6476999999999995
- type: nauc_recall_at_1_diff1
value: 50.871500000000005
- type: nauc_recall_at_3_max
value: 26.1146
- type: nauc_recall_at_3_std
value: -7.9985
- type: nauc_recall_at_3_diff1
value: 44.0707
- type: nauc_recall_at_5_max
value: 25.3292
- type: nauc_recall_at_5_std
value: -7.331799999999999
- type: nauc_recall_at_5_diff1
value: 41.6571
- type: nauc_recall_at_10_max
value: 23.6012
- type: nauc_recall_at_10_std
value: -7.5294
- type: nauc_recall_at_10_diff1
value: 40.244099999999996
- type: nauc_recall_at_20_max
value: 22.453300000000002
- type: nauc_recall_at_20_std
value: -5.3024000000000004
- type: nauc_recall_at_20_diff1
value: 38.4242
- type: nauc_recall_at_100_max
value: 20.069100000000002
- type: nauc_recall_at_100_std
value: 1.4581
- type: nauc_recall_at_100_diff1
value: 35.1775
- type: nauc_recall_at_1000_max
value: 19.4385
- type: nauc_recall_at_1000_std
value: 9.0112
- type: nauc_recall_at_1000_diff1
value: 34.138000000000005
- type: nauc_precision_at_1_max
value: 27.284999999999997
- type: nauc_precision_at_1_std
value: -6.6476999999999995
- type: nauc_precision_at_1_diff1
value: 50.871500000000005
- type: nauc_precision_at_3_max
value: 26.1146
- type: nauc_precision_at_3_std
value: -7.9985
- type: nauc_precision_at_3_diff1
value: 44.0707
- type: nauc_precision_at_5_max
value: 25.3292
- type: nauc_precision_at_5_std
value: -7.331799999999999
- type: nauc_precision_at_5_diff1
value: 41.6571
- type: nauc_precision_at_10_max
value: 23.6012
- type: nauc_precision_at_10_std
value: -7.5294
- type: nauc_precision_at_10_diff1
value: 40.244099999999996
- type: nauc_precision_at_20_max
value: 22.453300000000002
- type: nauc_precision_at_20_std
value: -5.3024000000000004
- type: nauc_precision_at_20_diff1
value: 38.4242
- type: nauc_precision_at_100_max
value: 20.069100000000002
- type: nauc_precision_at_100_std
value: 1.4581
- type: nauc_precision_at_100_diff1
value: 35.1775
- type: nauc_precision_at_1000_max
value: 19.4385
- type: nauc_precision_at_1000_std
value: 9.0112
- type: nauc_precision_at_1000_diff1
value: 34.138000000000005
- type: nauc_mrr_at_1_max
value: 27.334000000000003
- type: nauc_mrr_at_1_std
value: -6.5517
- type: nauc_mrr_at_1_diff1
value: 50.9102
- type: nauc_mrr_at_3_max
value: 26.807199999999998
- type: nauc_mrr_at_3_std
value: -7.436800000000001
- type: nauc_mrr_at_3_diff1
value: 47.7425
- type: nauc_mrr_at_5_max
value: 26.6194
- type: nauc_mrr_at_5_std
value: -7.3031
- type: nauc_mrr_at_5_diff1
value: 47.2053
- type: nauc_mrr_at_10_max
value: 26.3924
- type: nauc_mrr_at_10_std
value: -7.324700000000001
- type: nauc_mrr_at_10_diff1
value: 47.051500000000004
- type: nauc_mrr_at_20_max
value: 26.3274
- type: nauc_mrr_at_20_std
value: -7.209899999999999
- type: nauc_mrr_at_20_diff1
value: 46.953
- type: nauc_mrr_at_100_max
value: 26.3019
- type: nauc_mrr_at_100_std
value: -7.0785
- type: nauc_mrr_at_100_diff1
value: 46.9298
- type: nauc_mrr_at_1000_max
value: 26.311
- type: nauc_mrr_at_1000_std
value: -7.0719
- type: nauc_mrr_at_1000_diff1
value: 46.942499999999995
- type: main_score
value: 42.094
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetCCRetrieval (ruby)
type: CoIR-Retrieval/CodeSearchNet-ccr
config: ruby
split: test
revision: 6e1effa2c03723c5fde48ee912b5ee08d4f211e8
metrics:
- type: ndcg_at_1
value: 37.827
- type: ndcg_at_3
value: 47.599000000000004
- type: ndcg_at_5
value: 49.687
- type: ndcg_at_10
value: 51.686
- type: ndcg_at_20
value: 53.018
- type: ndcg_at_100
value: 54.75600000000001
- type: ndcg_at_1000
value: 56.196
- type: map_at_1
value: 37.827
- type: map_at_3
value: 45.242
- type: map_at_5
value: 46.400000000000006
- type: map_at_10
value: 47.223
- type: map_at_20
value: 47.593
- type: map_at_100
value: 47.824
- type: map_at_1000
value: 47.878
- type: recall_at_1
value: 37.827
- type: recall_at_3
value: 54.400999999999996
- type: recall_at_5
value: 59.477000000000004
- type: recall_at_10
value: 65.66199999999999
- type: recall_at_20
value: 70.896
- type: recall_at_100
value: 80.41199999999999
- type: recall_at_1000
value: 91.753
- type: precision_at_1
value: 37.827
- type: precision_at_3
value: 18.134
- type: precision_at_5
value: 11.895
- type: precision_at_10
value: 6.566
- type: precision_at_20
value: 3.5450000000000004
- type: precision_at_100
value: 0.804
- type: precision_at_1000
value: 0.092
- type: mrr_at_1
value: 37.8271
- type: mrr_at_3
value: 45.2154
- type: mrr_at_5
value: 46.3931
- type: mrr_at_10
value: 47.2166
- type: mrr_at_20
value: 47.5869
- type: mrr_at_100
value: 47.8167
- type: mrr_at_1000
value: 47.8715
- type: nauc_ndcg_at_1_max
value: 34.1998
- type: nauc_ndcg_at_1_std
value: -15.7415
- type: nauc_ndcg_at_1_diff1
value: 61.8572
- type: nauc_ndcg_at_3_max
value: 33.566
- type: nauc_ndcg_at_3_std
value: -18.0058
- type: nauc_ndcg_at_3_diff1
value: 54.5929
- type: nauc_ndcg_at_5_max
value: 34.0447
- type: nauc_ndcg_at_5_std
value: -17.3914
- type: nauc_ndcg_at_5_diff1
value: 53.980399999999996
- type: nauc_ndcg_at_10_max
value: 34.0521
- type: nauc_ndcg_at_10_std
value: -17.298099999999998
- type: nauc_ndcg_at_10_diff1
value: 53.63830000000001
- type: nauc_ndcg_at_20_max
value: 34.076499999999996
- type: nauc_ndcg_at_20_std
value: -17.1978
- type: nauc_ndcg_at_20_diff1
value: 53.3739
- type: nauc_ndcg_at_100_max
value: 33.9961
- type: nauc_ndcg_at_100_std
value: -17.0232
- type: nauc_ndcg_at_100_diff1
value: 53.8714
- type: nauc_ndcg_at_1000_max
value: 34.0269
- type: nauc_ndcg_at_1000_std
value: -16.6124
- type: nauc_ndcg_at_1000_diff1
value: 54.286199999999994
- type: nauc_map_at_1_max
value: 34.1998
- type: nauc_map_at_1_std
value: -15.7415
- type: nauc_map_at_1_diff1
value: 61.8572
- type: nauc_map_at_3_max
value: 33.8395
- type: nauc_map_at_3_std
value: -17.529
- type: nauc_map_at_3_diff1
value: 56.4065
- type: nauc_map_at_5_max
value: 34.1343
- type: nauc_map_at_5_std
value: -17.1732
- type: nauc_map_at_5_diff1
value: 56.1246
- type: nauc_map_at_10_max
value: 34.1717
- type: nauc_map_at_10_std
value: -17.1179
- type: nauc_map_at_10_diff1
value: 56.041399999999996
- type: nauc_map_at_20_max
value: 34.1895
- type: nauc_map_at_20_std
value: -17.077
- type: nauc_map_at_20_diff1
value: 55.96489999999999
- type: nauc_map_at_100_max
value: 34.1922
- type: nauc_map_at_100_std
value: -17.0664
- type: nauc_map_at_100_diff1
value: 56.0487
- type: nauc_map_at_1000_max
value: 34.186
- type: nauc_map_at_1000_std
value: -17.0498
- type: nauc_map_at_1000_diff1
value: 56.0623
- type: nauc_recall_at_1_max
value: 34.1998
- type: nauc_recall_at_1_std
value: -15.7415
- type: nauc_recall_at_1_diff1
value: 61.8572
- type: nauc_recall_at_3_max
value: 32.6911
- type: nauc_recall_at_3_std
value: -19.4073
- type: nauc_recall_at_3_diff1
value: 49.1188
- type: nauc_recall_at_5_max
value: 33.7416
- type: nauc_recall_at_5_std
value: -17.965700000000002
- type: nauc_recall_at_5_diff1
value: 47.0821
- type: nauc_recall_at_10_max
value: 33.5209
- type: nauc_recall_at_10_std
value: -17.7965
- type: nauc_recall_at_10_diff1
value: 44.8874
- type: nauc_recall_at_20_max
value: 33.4757
- type: nauc_recall_at_20_std
value: -17.4921
- type: nauc_recall_at_20_diff1
value: 42.747
- type: nauc_recall_at_100_max
value: 32.2069
- type: nauc_recall_at_100_std
value: -15.6244
- type: nauc_recall_at_100_diff1
value: 43.0441
- type: nauc_recall_at_1000_max
value: 32.428000000000004
- type: nauc_recall_at_1000_std
value: -2.6172
- type: nauc_recall_at_1000_diff1
value: 42.1384
- type: nauc_precision_at_1_max
value: 34.1998
- type: nauc_precision_at_1_std
value: -15.7415
- type: nauc_precision_at_1_diff1
value: 61.8572
- type: nauc_precision_at_3_max
value: 32.6911
- type: nauc_precision_at_3_std
value: -19.4073
- type: nauc_precision_at_3_diff1
value: 49.1188
- type: nauc_precision_at_5_max
value: 33.7416
- type: nauc_precision_at_5_std
value: -17.965700000000002
- type: nauc_precision_at_5_diff1
value: 47.0821
- type: nauc_precision_at_10_max
value: 33.5209
- type: nauc_precision_at_10_std
value: -17.7965
- type: nauc_precision_at_10_diff1
value: 44.8874
- type: nauc_precision_at_20_max
value: 33.4757
- type: nauc_precision_at_20_std
value: -17.4921
- type: nauc_precision_at_20_diff1
value: 42.747
- type: nauc_precision_at_100_max
value: 32.2069
- type: nauc_precision_at_100_std
value: -15.6244
- type: nauc_precision_at_100_diff1
value: 43.0441
- type: nauc_precision_at_1000_max
value: 32.428000000000004
- type: nauc_precision_at_1000_std
value: -2.6172
- type: nauc_precision_at_1000_diff1
value: 42.1384
- type: nauc_mrr_at_1_max
value: 34.5467
- type: nauc_mrr_at_1_std
value: -15.676499999999999
- type: nauc_mrr_at_1_diff1
value: 61.8572
- type: nauc_mrr_at_3_max
value: 34.0355
- type: nauc_mrr_at_3_std
value: -17.448900000000002
- type: nauc_mrr_at_3_diff1
value: 56.4005
- type: nauc_mrr_at_5_max
value: 34.319100000000006
- type: nauc_mrr_at_5_std
value: -17.1276
- type: nauc_mrr_at_5_diff1
value: 56.1231
- type: nauc_mrr_at_10_max
value: 34.3588
- type: nauc_mrr_at_10_std
value: -17.0717
- type: nauc_mrr_at_10_diff1
value: 56.03979999999999
- type: nauc_mrr_at_20_max
value: 34.3778
- type: nauc_mrr_at_20_std
value: -17.0305
- type: nauc_mrr_at_20_diff1
value: 55.96339999999999
- type: nauc_mrr_at_100_max
value: 34.3812
- type: nauc_mrr_at_100_std
value: -17.022599999999997
- type: nauc_mrr_at_100_diff1
value: 56.0469
- type: nauc_mrr_at_1000_max
value: 34.375
- type: nauc_mrr_at_1000_std
value: -17.0037
- type: nauc_mrr_at_1000_diff1
value: 56.0608
- type: main_score
value: 51.686
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetCCRetrieval (java)
type: CoIR-Retrieval/CodeSearchNet-ccr
config: java
split: test
revision: 6e1effa2c03723c5fde48ee912b5ee08d4f211e8
metrics:
- type: ndcg_at_1
value: 39.744
- type: ndcg_at_3
value: 48.465
- type: ndcg_at_5
value: 50.615
- type: ndcg_at_10
value: 52.544000000000004
- type: ndcg_at_20
value: 53.864999999999995
- type: ndcg_at_100
value: 55.806
- type: ndcg_at_1000
value: 57.082
- type: map_at_1
value: 39.744
- type: map_at_3
value: 46.346
- type: map_at_5
value: 47.538000000000004
- type: map_at_10
value: 48.333999999999996
- type: map_at_20
value: 48.699999999999996
- type: map_at_100
value: 48.97
- type: map_at_1000
value: 49.014
- type: recall_at_1
value: 39.744
- type: recall_at_3
value: 54.586999999999996
- type: recall_at_5
value: 59.80799999999999
- type: recall_at_10
value: 65.778
- type: recall_at_20
value: 70.97200000000001
- type: recall_at_100
value: 81.415
- type: recall_at_1000
value: 91.702
- type: precision_at_1
value: 39.744
- type: precision_at_3
value: 18.196
- type: precision_at_5
value: 11.962
- type: precision_at_10
value: 6.578
- type: precision_at_20
value: 3.549
- type: precision_at_100
value: 0.814
- type: precision_at_1000
value: 0.092
- type: mrr_at_1
value: 39.7901
- type: mrr_at_3
value: 46.367000000000004
- type: mrr_at_5
value: 47.556799999999996
- type: mrr_at_10
value: 48.3531
- type: mrr_at_20
value: 48.7206
- type: mrr_at_100
value: 48.9901
- type: mrr_at_1000
value: 49.034
- type: nauc_ndcg_at_1_max
value: 31.1431
- type: nauc_ndcg_at_1_std
value: -10.407399999999999
- type: nauc_ndcg_at_1_diff1
value: 56.6466
- type: nauc_ndcg_at_3_max
value: 33.022800000000004
- type: nauc_ndcg_at_3_std
value: -9.5046
- type: nauc_ndcg_at_3_diff1
value: 52.7916
- type: nauc_ndcg_at_5_max
value: 33.1721
- type: nauc_ndcg_at_5_std
value: -9.0365
- type: nauc_ndcg_at_5_diff1
value: 52.317400000000006
- type: nauc_ndcg_at_10_max
value: 33.1837
- type: nauc_ndcg_at_10_std
value: -8.4008
- type: nauc_ndcg_at_10_diff1
value: 52.007999999999996
- type: nauc_ndcg_at_20_max
value: 33.024
- type: nauc_ndcg_at_20_std
value: -7.9246
- type: nauc_ndcg_at_20_diff1
value: 51.9078
- type: nauc_ndcg_at_100_max
value: 32.962599999999995
- type: nauc_ndcg_at_100_std
value: -7.4719
- type: nauc_ndcg_at_100_diff1
value: 51.94180000000001
- type: nauc_ndcg_at_1000_max
value: 33.1905
- type: nauc_ndcg_at_1000_std
value: -7.295599999999999
- type: nauc_ndcg_at_1000_diff1
value: 52.351099999999995
- type: nauc_map_at_1_max
value: 31.1431
- type: nauc_map_at_1_std
value: -10.407399999999999
- type: nauc_map_at_1_diff1
value: 56.6466
- type: nauc_map_at_3_max
value: 32.5713
- type: nauc_map_at_3_std
value: -9.734
- type: nauc_map_at_3_diff1
value: 53.703599999999994
- type: nauc_map_at_5_max
value: 32.6494
- type: nauc_map_at_5_std
value: -9.4813
- type: nauc_map_at_5_diff1
value: 53.4567
- type: nauc_map_at_10_max
value: 32.664100000000005
- type: nauc_map_at_10_std
value: -9.225999999999999
- type: nauc_map_at_10_diff1
value: 53.3589
- type: nauc_map_at_20_max
value: 32.6136
- type: nauc_map_at_20_std
value: -9.107899999999999
- type: nauc_map_at_20_diff1
value: 53.337
- type: nauc_map_at_100_max
value: 32.6036
- type: nauc_map_at_100_std
value: -9.0547
- type: nauc_map_at_100_diff1
value: 53.35339999999999
- type: nauc_map_at_1000_max
value: 32.610299999999995
- type: nauc_map_at_1000_std
value: -9.0493
- type: nauc_map_at_1000_diff1
value: 53.3656
- type: nauc_recall_at_1_max
value: 31.1431
- type: nauc_recall_at_1_std
value: -10.407399999999999
- type: nauc_recall_at_1_diff1
value: 56.6466
- type: nauc_recall_at_3_max
value: 34.3846
- type: nauc_recall_at_3_std
value: -8.8071
- type: nauc_recall_at_3_diff1
value: 50.047
- type: nauc_recall_at_5_max
value: 34.8431
- type: nauc_recall_at_5_std
value: -7.550999999999999
- type: nauc_recall_at_5_diff1
value: 48.6504
- type: nauc_recall_at_10_max
value: 34.9686
- type: nauc_recall_at_10_std
value: -5.1544
- type: nauc_recall_at_10_diff1
value: 47.0462
- type: nauc_recall_at_20_max
value: 34.441300000000005
- type: nauc_recall_at_20_std
value: -2.3698
- type: nauc_recall_at_20_diff1
value: 45.9903
- type: nauc_recall_at_100_max
value: 34.4855
- type: nauc_recall_at_100_std
value: 4.2675
- type: nauc_recall_at_100_diff1
value: 43.5966
- type: nauc_recall_at_1000_max
value: 42.692600000000006
- type: nauc_recall_at_1000_std
value: 21.8632
- type: nauc_recall_at_1000_diff1
value: 46.5143
- type: nauc_precision_at_1_max
value: 31.1431
- type: nauc_precision_at_1_std
value: -10.407399999999999
- type: nauc_precision_at_1_diff1
value: 56.6466
- type: nauc_precision_at_3_max
value: 34.3846
- type: nauc_precision_at_3_std
value: -8.8071
- type: nauc_precision_at_3_diff1
value: 50.047
- type: nauc_precision_at_5_max
value: 34.8431
- type: nauc_precision_at_5_std
value: -7.550999999999999
- type: nauc_precision_at_5_diff1
value: 48.6504
- type: nauc_precision_at_10_max
value: 34.9686
- type: nauc_precision_at_10_std
value: -5.1544
- type: nauc_precision_at_10_diff1
value: 47.0462
- type: nauc_precision_at_20_max
value: 34.441300000000005
- type: nauc_precision_at_20_std
value: -2.3698
- type: nauc_precision_at_20_diff1
value: 45.9903
- type: nauc_precision_at_100_max
value: 34.4855
- type: nauc_precision_at_100_std
value: 4.2675
- type: nauc_precision_at_100_diff1
value: 43.5966
- type: nauc_precision_at_1000_max
value: 42.692600000000006
- type: nauc_precision_at_1000_std
value: 21.8632
- type: nauc_precision_at_1000_diff1
value: 46.5143
- type: nauc_mrr_at_1_max
value: 31.1816
- type: nauc_mrr_at_1_std
value: -10.2945
- type: nauc_mrr_at_1_diff1
value: 56.5084
- type: nauc_mrr_at_3_max
value: 32.609300000000005
- type: nauc_mrr_at_3_std
value: -9.6538
- type: nauc_mrr_at_3_diff1
value: 53.6187
- type: nauc_mrr_at_5_max
value: 32.6863
- type: nauc_mrr_at_5_std
value: -9.3972
- type: nauc_mrr_at_5_diff1
value: 53.378400000000006
- type: nauc_mrr_at_10_max
value: 32.697700000000005
- type: nauc_mrr_at_10_std
value: -9.1456
- type: nauc_mrr_at_10_diff1
value: 53.2796
- type: nauc_mrr_at_20_max
value: 32.6496
- type: nauc_mrr_at_20_std
value: -9.0244
- type: nauc_mrr_at_20_diff1
value: 53.257600000000004
- type: nauc_mrr_at_100_max
value: 32.6402
- type: nauc_mrr_at_100_std
value: -8.970799999999999
- type: nauc_mrr_at_100_diff1
value: 53.274100000000004
- type: nauc_mrr_at_1000_max
value: 32.647
- type: nauc_mrr_at_1000_std
value: -8.9653
- type: nauc_mrr_at_1000_diff1
value: 53.286100000000005
- type: main_score
value: 52.544000000000004
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetCCRetrieval (php)
type: CoIR-Retrieval/CodeSearchNet-ccr
config: php
split: test
revision: 6e1effa2c03723c5fde48ee912b5ee08d4f211e8
metrics:
- type: ndcg_at_1
value: 29.685
- type: ndcg_at_3
value: 37.448
- type: ndcg_at_5
value: 39.781
- type: ndcg_at_10
value: 41.814
- type: ndcg_at_20
value: 43.333
- type: ndcg_at_100
value: 45.664
- type: ndcg_at_1000
value: 47.536
- type: map_at_1
value: 29.685
- type: map_at_3
value: 35.545
- type: map_at_5
value: 36.839
- type: map_at_10
value: 37.682
- type: map_at_20
value: 38.099
- type: map_at_100
value: 38.415
- type: map_at_1000
value: 38.478
- type: recall_at_1
value: 29.685
- type: recall_at_3
value: 42.95
- type: recall_at_5
value: 48.616
- type: recall_at_10
value: 54.888000000000005
- type: recall_at_20
value: 60.895999999999994
- type: recall_at_100
value: 73.548
- type: recall_at_1000
value: 88.697
- type: precision_at_1
value: 29.685
- type: precision_at_3
value: 14.316999999999998
- type: precision_at_5
value: 9.722999999999999
- type: precision_at_10
value: 5.489
- type: precision_at_20
value: 3.045
- type: precision_at_100
value: 0.735
- type: precision_at_1000
value: 0.089
- type: mrr_at_1
value: 29.6489
- type: mrr_at_3
value: 35.5299
- type: mrr_at_5
value: 36.8133
- type: mrr_at_10
value: 37.6632
- type: mrr_at_20
value: 38.079299999999996
- type: mrr_at_100
value: 38.3951
- type: mrr_at_1000
value: 38.4584
- type: nauc_ndcg_at_1_max
value: 23.1966
- type: nauc_ndcg_at_1_std
value: -9.4926
- type: nauc_ndcg_at_1_diff1
value: 50.2664
- type: nauc_ndcg_at_3_max
value: 22.9114
- type: nauc_ndcg_at_3_std
value: -9.3945
- type: nauc_ndcg_at_3_diff1
value: 45.266400000000004
- type: nauc_ndcg_at_5_max
value: 22.2736
- type: nauc_ndcg_at_5_std
value: -9.1173
- type: nauc_ndcg_at_5_diff1
value: 44.1003
- type: nauc_ndcg_at_10_max
value: 22.0212
- type: nauc_ndcg_at_10_std
value: -8.5559
- type: nauc_ndcg_at_10_diff1
value: 43.5542
- type: nauc_ndcg_at_20_max
value: 21.5977
- type: nauc_ndcg_at_20_std
value: -8.236400000000001
- type: nauc_ndcg_at_20_diff1
value: 43.1564
- type: nauc_ndcg_at_100_max
value: 21.4543
- type: nauc_ndcg_at_100_std
value: -7.5462
- type: nauc_ndcg_at_100_diff1
value: 43.1768
- type: nauc_ndcg_at_1000_max
value: 21.6202
- type: nauc_ndcg_at_1000_std
value: -7.5571
- type: nauc_ndcg_at_1000_diff1
value: 43.5388
- type: nauc_map_at_1_max
value: 23.1966
- type: nauc_map_at_1_std
value: -9.4926
- type: nauc_map_at_1_diff1
value: 50.2664
- type: nauc_map_at_3_max
value: 23.0018
- type: nauc_map_at_3_std
value: -9.4391
- type: nauc_map_at_3_diff1
value: 46.428000000000004
- type: nauc_map_at_5_max
value: 22.642300000000002
- type: nauc_map_at_5_std
value: -9.2849
- type: nauc_map_at_5_diff1
value: 45.776
- type: nauc_map_at_10_max
value: 22.551099999999998
- type: nauc_map_at_10_std
value: -9.045300000000001
- type: nauc_map_at_10_diff1
value: 45.5645
- type: nauc_map_at_20_max
value: 22.4407
- type: nauc_map_at_20_std
value: -8.9542
- type: nauc_map_at_20_diff1
value: 45.4588
- type: nauc_map_at_100_max
value: 22.4247
- type: nauc_map_at_100_std
value: -8.869299999999999
- type: nauc_map_at_100_diff1
value: 45.467200000000005
- type: nauc_map_at_1000_max
value: 22.429299999999998
- type: nauc_map_at_1000_std
value: -8.8653
- type: nauc_map_at_1000_diff1
value: 45.479
- type: nauc_recall_at_1_max
value: 23.1966
- type: nauc_recall_at_1_std
value: -9.4926
- type: nauc_recall_at_1_diff1
value: 50.2664
- type: nauc_recall_at_3_max
value: 22.6466
- type: nauc_recall_at_3_std
value: -9.259599999999999
- type: nauc_recall_at_3_diff1
value: 41.9917
- type: nauc_recall_at_5_max
value: 21.121100000000002
- type: nauc_recall_at_5_std
value: -8.5882
- type: nauc_recall_at_5_diff1
value: 39.1445
- type: nauc_recall_at_10_max
value: 20.191200000000002
- type: nauc_recall_at_10_std
value: -6.824
- type: nauc_recall_at_10_diff1
value: 37.107
- type: nauc_recall_at_20_max
value: 18.2104
- type: nauc_recall_at_20_std
value: -5.3749
- type: nauc_recall_at_20_diff1
value: 34.9673
- type: nauc_recall_at_100_max
value: 16.0859
- type: nauc_recall_at_100_std
value: 0.7539
- type: nauc_recall_at_100_diff1
value: 32.603500000000004
- type: nauc_recall_at_1000_max
value: 14.1642
- type: nauc_recall_at_1000_std
value: 8.5463
- type: nauc_recall_at_1000_diff1
value: 29.5927
- type: nauc_precision_at_1_max
value: 23.1966
- type: nauc_precision_at_1_std
value: -9.4926
- type: nauc_precision_at_1_diff1
value: 50.2664
- type: nauc_precision_at_3_max
value: 22.6466
- type: nauc_precision_at_3_std
value: -9.259599999999999
- type: nauc_precision_at_3_diff1
value: 41.9917
- type: nauc_precision_at_5_max
value: 21.121100000000002
- type: nauc_precision_at_5_std
value: -8.5882
- type: nauc_precision_at_5_diff1
value: 39.1445
- type: nauc_precision_at_10_max
value: 20.191200000000002
- type: nauc_precision_at_10_std
value: -6.824
- type: nauc_precision_at_10_diff1
value: 37.107
- type: nauc_precision_at_20_max
value: 18.2104
- type: nauc_precision_at_20_std
value: -5.3749
- type: nauc_precision_at_20_diff1
value: 34.9673
- type: nauc_precision_at_100_max
value: 16.0859
- type: nauc_precision_at_100_std
value: 0.7539
- type: nauc_precision_at_100_diff1
value: 32.603500000000004
- type: nauc_precision_at_1000_max
value: 14.1642
- type: nauc_precision_at_1000_std
value: 8.5463
- type: nauc_precision_at_1000_diff1
value: 29.5927
- type: nauc_mrr_at_1_max
value: 23.2502
- type: nauc_mrr_at_1_std
value: -9.507
- type: nauc_mrr_at_1_diff1
value: 50.3997
- type: nauc_mrr_at_3_max
value: 23.009
- type: nauc_mrr_at_3_std
value: -9.4541
- type: nauc_mrr_at_3_diff1
value: 46.4733
- type: nauc_mrr_at_5_max
value: 22.656000000000002
- type: nauc_mrr_at_5_std
value: -9.2987
- type: nauc_mrr_at_5_diff1
value: 45.839999999999996
- type: nauc_mrr_at_10_max
value: 22.5697
- type: nauc_mrr_at_10_std
value: -9.0543
- type: nauc_mrr_at_10_diff1
value: 45.618700000000004
- type: nauc_mrr_at_20_max
value: 22.461000000000002
- type: nauc_mrr_at_20_std
value: -8.9628
- type: nauc_mrr_at_20_diff1
value: 45.5146
- type: nauc_mrr_at_100_max
value: 22.4449
- type: nauc_mrr_at_100_std
value: -8.877699999999999
- type: nauc_mrr_at_100_diff1
value: 45.5229
- type: nauc_mrr_at_1000_max
value: 22.4498
- type: nauc_mrr_at_1000_std
value: -8.873899999999999
- type: nauc_mrr_at_1000_diff1
value: 45.535199999999996
- type: main_score
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetRetrieval (python)
type: code-search-net/code_search_net
config: python
split: test
revision: fdc6a9e39575768c27eb8a2a5f702bf846eb4759
metrics:
- type: ndcg_at_1
value: 73.5
- type: ndcg_at_3
value: 82.35900000000001
- type: ndcg_at_5
value: 83.543
- type: ndcg_at_10
value: 84.357
- type: ndcg_at_20
value: 84.973
- type: ndcg_at_100
value: 85.449
- type: ndcg_at_1000
value: 85.591
- type: map_at_1
value: 73.5
- type: map_at_3
value: 80.2
- type: map_at_5
value: 80.85
- type: map_at_10
value: 81.189
- type: map_at_20
value: 81.364
- type: map_at_100
value: 81.434
- type: map_at_1000
value: 81.44
- type: recall_at_1
value: 73.5
- type: recall_at_3
value: 88.6
- type: recall_at_5
value: 91.5
- type: recall_at_10
value: 94.0
- type: recall_at_20
value: 96.39999999999999
- type: recall_at_100
value: 98.9
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 73.5
- type: precision_at_3
value: 29.532999999999998
- type: precision_at_5
value: 18.3
- type: precision_at_10
value: 9.4
- type: precision_at_20
value: 4.82
- type: precision_at_100
value: 0.989
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 73.5
- type: mrr_at_3
value: 80.2
- type: mrr_at_5
value: 80.85
- type: mrr_at_10
value: 81.1894
- type: mrr_at_20
value: 81.3638
- type: mrr_at_100
value: 81.43430000000001
- type: mrr_at_1000
value: 81.44
- type: nauc_ndcg_at_1_max
value: 45.553
- type: nauc_ndcg_at_1_std
value: -3.8149
- type: nauc_ndcg_at_1_diff1
value: 72.4638
- type: nauc_ndcg_at_3_max
value: 47.8454
- type: nauc_ndcg_at_3_std
value: -3.2174
- type: nauc_ndcg_at_3_diff1
value: 69.05059999999999
- type: nauc_ndcg_at_5_max
value: 48.105599999999995
- type: nauc_ndcg_at_5_std
value: -3.0107
- type: nauc_ndcg_at_5_diff1
value: 70.2436
- type: nauc_ndcg_at_10_max
value: 48.871900000000004
- type: nauc_ndcg_at_10_std
value: -2.7289
- type: nauc_ndcg_at_10_diff1
value: 70.87440000000001
- type: nauc_ndcg_at_20_max
value: 49.1441
- type: nauc_ndcg_at_20_std
value: -2.2193
- type: nauc_ndcg_at_20_diff1
value: 70.9602
- type: nauc_ndcg_at_100_max
value: 48.2597
- type: nauc_ndcg_at_100_std
value: -2.8648
- type: nauc_ndcg_at_100_diff1
value: 70.5487
- type: nauc_ndcg_at_1000_max
value: 48.0576
- type: nauc_ndcg_at_1000_std
value: -3.0315000000000003
- type: nauc_ndcg_at_1000_diff1
value: 70.8214
- type: nauc_map_at_1_max
value: 45.553
- type: nauc_map_at_1_std
value: -3.8149
- type: nauc_map_at_1_diff1
value: 72.4638
- type: nauc_map_at_3_max
value: 47.143
- type: nauc_map_at_3_std
value: -3.4511
- type: nauc_map_at_3_diff1
value: 70.2411
- type: nauc_map_at_5_max
value: 47.2524
- type: nauc_map_at_5_std
value: -3.3834999999999997
- type: nauc_map_at_5_diff1
value: 70.8691
- type: nauc_map_at_10_max
value: 47.5215
- type: nauc_map_at_10_std
value: -3.3042000000000002
- type: nauc_map_at_10_diff1
value: 71.1041
- type: nauc_map_at_20_max
value: 47.5871
- type: nauc_map_at_20_std
value: -3.1888
- type: nauc_map_at_20_diff1
value: 71.1157
- type: nauc_map_at_100_max
value: 47.4746
- type: nauc_map_at_100_std
value: -3.3092
- type: nauc_map_at_100_diff1
value: 71.0626
- type: nauc_map_at_1000_max
value: 47.4686
- type: nauc_map_at_1000_std
value: -3.3099000000000003
- type: nauc_map_at_1000_diff1
value: 71.0712
- type: nauc_recall_at_1_max
value: 45.553
- type: nauc_recall_at_1_std
value: -3.8149
- type: nauc_recall_at_1_diff1
value: 72.4638
- type: nauc_recall_at_3_max
value: 51.09590000000001
- type: nauc_recall_at_3_std
value: -2.1018
- type: nauc_recall_at_3_diff1
value: 63.4433
- type: nauc_recall_at_5_max
value: 53.195499999999996
- type: nauc_recall_at_5_std
value: -0.6421
- type: nauc_recall_at_5_diff1
value: 66.7381
- type: nauc_recall_at_10_max
value: 60.660599999999995
- type: nauc_recall_at_10_std
value: 2.5576000000000003
- type: nauc_recall_at_10_diff1
value: 69.8771
- type: nauc_recall_at_20_max
value: 72.0082
- type: nauc_recall_at_20_std
value: 13.519300000000001
- type: nauc_recall_at_20_diff1
value: 70.8774
- type: nauc_recall_at_100_max
value: 67.6683
- type: nauc_recall_at_100_std
value: 16.4757
- type: nauc_recall_at_100_diff1
value: 45.535199999999996
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 45.553
- type: nauc_precision_at_1_std
value: -3.8149
- type: nauc_precision_at_1_diff1
value: 72.4638
- type: nauc_precision_at_3_max
value: 51.09590000000001
- type: nauc_precision_at_3_std
value: -2.1018
- type: nauc_precision_at_3_diff1
value: 63.4433
- type: nauc_precision_at_5_max
value: 53.195499999999996
- type: nauc_precision_at_5_std
value: -0.6421
- type: nauc_precision_at_5_diff1
value: 66.7381
- type: nauc_precision_at_10_max
value: 60.660599999999995
- type: nauc_precision_at_10_std
value: 2.5576000000000003
- type: nauc_precision_at_10_diff1
value: 69.8771
- type: nauc_precision_at_20_max
value: 72.0082
- type: nauc_precision_at_20_std
value: 13.519300000000001
- type: nauc_precision_at_20_diff1
value: 70.8774
- type: nauc_precision_at_100_max
value: 67.6683
- type: nauc_precision_at_100_std
value: 16.4757
- type: nauc_precision_at_100_diff1
value: 45.535199999999996
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 45.553
- type: nauc_mrr_at_1_std
value: -3.8149
- type: nauc_mrr_at_1_diff1
value: 72.4638
- type: nauc_mrr_at_3_max
value: 47.143
- type: nauc_mrr_at_3_std
value: -3.4511
- type: nauc_mrr_at_3_diff1
value: 70.2411
- type: nauc_mrr_at_5_max
value: 47.2524
- type: nauc_mrr_at_5_std
value: -3.3834999999999997
- type: nauc_mrr_at_5_diff1
value: 70.8691
- type: nauc_mrr_at_10_max
value: 47.5215
- type: nauc_mrr_at_10_std
value: -3.3042000000000002
- type: nauc_mrr_at_10_diff1
value: 71.1041
- type: nauc_mrr_at_20_max
value: 47.5871
- type: nauc_mrr_at_20_std
value: -3.1888
- type: nauc_mrr_at_20_diff1
value: 71.1157
- type: nauc_mrr_at_100_max
value: 47.4746
- type: nauc_mrr_at_100_std
value: -3.3092
- type: nauc_mrr_at_100_diff1
value: 71.0626
- type: nauc_mrr_at_1000_max
value: 47.4686
- type: nauc_mrr_at_1000_std
value: -3.3099000000000003
- type: nauc_mrr_at_1000_diff1
value: 71.0712
- type: main_score
value: 84.357
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetRetrieval (javascript)
type: code-search-net/code_search_net
config: javascript
split: test
revision: fdc6a9e39575768c27eb8a2a5f702bf846eb4759
metrics:
- type: ndcg_at_1
value: 59.4
- type: ndcg_at_3
value: 68.58800000000001
- type: ndcg_at_5
value: 70.0
- type: ndcg_at_10
value: 71.384
- type: ndcg_at_20
value: 72.505
- type: ndcg_at_100
value: 73.532
- type: ndcg_at_1000
value: 74.414
- type: map_at_1
value: 59.4
- type: map_at_3
value: 66.367
- type: map_at_5
value: 67.157
- type: map_at_10
value: 67.72399999999999
- type: map_at_20
value: 68.036
- type: map_at_100
value: 68.182
- type: map_at_1000
value: 68.208
- type: recall_at_1
value: 59.4
- type: recall_at_3
value: 75.0
- type: recall_at_5
value: 78.4
- type: recall_at_10
value: 82.69999999999999
- type: recall_at_20
value: 87.1
- type: recall_at_100
value: 92.60000000000001
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 59.4
- type: precision_at_3
value: 25.0
- type: precision_at_5
value: 15.68
- type: precision_at_10
value: 8.27
- type: precision_at_20
value: 4.3549999999999995
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 59.4
- type: mrr_at_3
value: 66.3667
- type: mrr_at_5
value: 67.1567
- type: mrr_at_10
value: 67.72399999999999
- type: mrr_at_20
value: 68.036
- type: mrr_at_100
value: 68.1821
- type: mrr_at_1000
value: 68.20779999999999
- type: nauc_ndcg_at_1_max
value: 55.2077
- type: nauc_ndcg_at_1_std
value: 23.8385
- type: nauc_ndcg_at_1_diff1
value: 72.8827
- type: nauc_ndcg_at_3_max
value: 62.495
- type: nauc_ndcg_at_3_std
value: 31.867800000000003
- type: nauc_ndcg_at_3_diff1
value: 69.8148
- type: nauc_ndcg_at_5_max
value: 63.132999999999996
- type: nauc_ndcg_at_5_std
value: 33.3486
- type: nauc_ndcg_at_5_diff1
value: 69.8501
- type: nauc_ndcg_at_10_max
value: 64.3507
- type: nauc_ndcg_at_10_std
value: 36.4767
- type: nauc_ndcg_at_10_diff1
value: 69.5995
- type: nauc_ndcg_at_20_max
value: 63.930299999999995
- type: nauc_ndcg_at_20_std
value: 36.8457
- type: nauc_ndcg_at_20_diff1
value: 70.0822
- type: nauc_ndcg_at_100_max
value: 63.10249999999999
- type: nauc_ndcg_at_100_std
value: 36.4228
- type: nauc_ndcg_at_100_diff1
value: 70.0219
- type: nauc_ndcg_at_1000_max
value: 62.3826
- type: nauc_ndcg_at_1000_std
value: 34.2464
- type: nauc_ndcg_at_1000_diff1
value: 70.2371
- type: nauc_map_at_1_max
value: 55.2077
- type: nauc_map_at_1_std
value: 23.8385
- type: nauc_map_at_1_diff1
value: 72.8827
- type: nauc_map_at_3_max
value: 60.4208
- type: nauc_map_at_3_std
value: 29.6445
- type: nauc_map_at_3_diff1
value: 70.58630000000001
- type: nauc_map_at_5_max
value: 60.709900000000005
- type: nauc_map_at_5_std
value: 30.400899999999996
- type: nauc_map_at_5_diff1
value: 70.6255
- type: nauc_map_at_10_max
value: 61.152499999999996
- type: nauc_map_at_10_std
value: 31.550800000000002
- type: nauc_map_at_10_diff1
value: 70.56099999999999
- type: nauc_map_at_20_max
value: 61.0075
- type: nauc_map_at_20_std
value: 31.585600000000003
- type: nauc_map_at_20_diff1
value: 70.6649
- type: nauc_map_at_100_max
value: 60.90370000000001
- type: nauc_map_at_100_std
value: 31.510700000000003
- type: nauc_map_at_100_diff1
value: 70.66839999999999
- type: nauc_map_at_1000_max
value: 60.8865
- type: nauc_map_at_1000_std
value: 31.4572
- type: nauc_map_at_1000_diff1
value: 70.6705
- type: nauc_recall_at_1_max
value: 55.2077
- type: nauc_recall_at_1_std
value: 23.8385
- type: nauc_recall_at_1_diff1
value: 72.8827
- type: nauc_recall_at_3_max
value: 69.92819999999999
- type: nauc_recall_at_3_std
value: 39.8045
- type: nauc_recall_at_3_diff1
value: 67.10040000000001
- type: nauc_recall_at_5_max
value: 72.8013
- type: nauc_recall_at_5_std
value: 45.1476
- type: nauc_recall_at_5_diff1
value: 66.84790000000001
- type: nauc_recall_at_10_max
value: 80.1828
- type: nauc_recall_at_10_std
value: 61.6781
- type: nauc_recall_at_10_diff1
value: 64.9272
- type: nauc_recall_at_20_max
value: 82.11840000000001
- type: nauc_recall_at_20_std
value: 72.1146
- type: nauc_recall_at_20_diff1
value: 67.3756
- type: nauc_recall_at_100_max
value: 80.8836
- type: nauc_recall_at_100_std
value: 89.47810000000001
- type: nauc_recall_at_100_diff1
value: 64.169
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 55.2077
- type: nauc_precision_at_1_std
value: 23.8385
- type: nauc_precision_at_1_diff1
value: 72.8827
- type: nauc_precision_at_3_max
value: 69.92819999999999
- type: nauc_precision_at_3_std
value: 39.8045
- type: nauc_precision_at_3_diff1
value: 67.10040000000001
- type: nauc_precision_at_5_max
value: 72.8013
- type: nauc_precision_at_5_std
value: 45.1476
- type: nauc_precision_at_5_diff1
value: 66.84790000000001
- type: nauc_precision_at_10_max
value: 80.1828
- type: nauc_precision_at_10_std
value: 61.6781
- type: nauc_precision_at_10_diff1
value: 64.9272
- type: nauc_precision_at_20_max
value: 82.11840000000001
- type: nauc_precision_at_20_std
value: 72.1146
- type: nauc_precision_at_20_diff1
value: 67.3756
- type: nauc_precision_at_100_max
value: 80.8836
- type: nauc_precision_at_100_std
value: 89.47810000000001
- type: nauc_precision_at_100_diff1
value: 64.169
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 55.2077
- type: nauc_mrr_at_1_std
value: 23.8385
- type: nauc_mrr_at_1_diff1
value: 72.8827
- type: nauc_mrr_at_3_max
value: 60.4208
- type: nauc_mrr_at_3_std
value: 29.6445
- type: nauc_mrr_at_3_diff1
value: 70.58630000000001
- type: nauc_mrr_at_5_max
value: 60.709900000000005
- type: nauc_mrr_at_5_std
value: 30.400899999999996
- type: nauc_mrr_at_5_diff1
value: 70.6255
- type: nauc_mrr_at_10_max
value: 61.152499999999996
- type: nauc_mrr_at_10_std
value: 31.550800000000002
- type: nauc_mrr_at_10_diff1
value: 70.56099999999999
- type: nauc_mrr_at_20_max
value: 61.0075
- type: nauc_mrr_at_20_std
value: 31.585600000000003
- type: nauc_mrr_at_20_diff1
value: 70.6649
- type: nauc_mrr_at_100_max
value: 60.90370000000001
- type: nauc_mrr_at_100_std
value: 31.510700000000003
- type: nauc_mrr_at_100_diff1
value: 70.66839999999999
- type: nauc_mrr_at_1000_max
value: 60.8865
- type: nauc_mrr_at_1000_std
value: 31.4572
- type: nauc_mrr_at_1000_diff1
value: 70.6705
- type: main_score
value: 71.384
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetRetrieval (go)
type: code-search-net/code_search_net
config: go
split: test
revision: fdc6a9e39575768c27eb8a2a5f702bf846eb4759
metrics:
- type: ndcg_at_1
value: 71.39999999999999
- type: ndcg_at_3
value: 82.32000000000001
- type: ndcg_at_5
value: 84.22699999999999
- type: ndcg_at_10
value: 84.922
- type: ndcg_at_20
value: 85.226
- type: ndcg_at_100
value: 85.563
- type: ndcg_at_1000
value: 85.66
- type: map_at_1
value: 71.39999999999999
- type: map_at_3
value: 79.783
- type: map_at_5
value: 80.848
- type: map_at_10
value: 81.145
- type: map_at_20
value: 81.229
- type: map_at_100
value: 81.284
- type: map_at_1000
value: 81.286
- type: recall_at_1
value: 71.39999999999999
- type: recall_at_3
value: 89.60000000000001
- type: recall_at_5
value: 94.19999999999999
- type: recall_at_10
value: 96.3
- type: recall_at_20
value: 97.5
- type: recall_at_100
value: 99.2
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 71.39999999999999
- type: precision_at_3
value: 29.866999999999997
- type: precision_at_5
value: 18.84
- type: precision_at_10
value: 9.629999999999999
- type: precision_at_20
value: 4.875
- type: precision_at_100
value: 0.992
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 71.39999999999999
- type: mrr_at_3
value: 79.7833
- type: mrr_at_5
value: 80.8483
- type: mrr_at_10
value: 81.14489999999999
- type: mrr_at_20
value: 81.22890000000001
- type: mrr_at_100
value: 81.2836
- type: mrr_at_1000
value: 81.28649999999999
- type: nauc_ndcg_at_1_max
value: 46.2744
- type: nauc_ndcg_at_1_std
value: -2.9863
- type: nauc_ndcg_at_1_diff1
value: 74.0857
- type: nauc_ndcg_at_3_max
value: 54.4012
- type: nauc_ndcg_at_3_std
value: -3.3299000000000003
- type: nauc_ndcg_at_3_diff1
value: 70.891
- type: nauc_ndcg_at_5_max
value: 54.3223
- type: nauc_ndcg_at_5_std
value: -1.6239
- type: nauc_ndcg_at_5_diff1
value: 71.7397
- type: nauc_ndcg_at_10_max
value: 53.629099999999994
- type: nauc_ndcg_at_10_std
value: -1.8041999999999998
- type: nauc_ndcg_at_10_diff1
value: 72.8108
- type: nauc_ndcg_at_20_max
value: 52.8247
- type: nauc_ndcg_at_20_std
value: -2.6823
- type: nauc_ndcg_at_20_diff1
value: 72.7573
- type: nauc_ndcg_at_100_max
value: 52.359
- type: nauc_ndcg_at_100_std
value: -2.8805
- type: nauc_ndcg_at_100_diff1
value: 72.8282
- type: nauc_ndcg_at_1000_max
value: 52.1323
- type: nauc_ndcg_at_1000_std
value: -2.8353
- type: nauc_ndcg_at_1000_diff1
value: 72.6771
- type: nauc_map_at_1_max
value: 46.2744
- type: nauc_map_at_1_std
value: -2.9863
- type: nauc_map_at_1_diff1
value: 74.0857
- type: nauc_map_at_3_max
value: 52.0957
- type: nauc_map_at_3_std
value: -3.5077999999999996
- type: nauc_map_at_3_diff1
value: 71.90530000000001
- type: nauc_map_at_5_max
value: 51.9209
- type: nauc_map_at_5_std
value: -2.7184
- type: nauc_map_at_5_diff1
value: 72.3474
- type: nauc_map_at_10_max
value: 51.642900000000004
- type: nauc_map_at_10_std
value: -2.8069
- type: nauc_map_at_10_diff1
value: 72.74589999999999
- type: nauc_map_at_20_max
value: 51.451800000000006
- type: nauc_map_at_20_std
value: -2.9922
- type: nauc_map_at_20_diff1
value: 72.7222
- type: nauc_map_at_100_max
value: 51.3795
- type: nauc_map_at_100_std
value: -3.0112
- type: nauc_map_at_100_diff1
value: 72.723
- type: nauc_map_at_1000_max
value: 51.3724
- type: nauc_map_at_1000_std
value: -3.009
- type: nauc_map_at_1000_diff1
value: 72.7192
- type: nauc_recall_at_1_max
value: 46.2744
- type: nauc_recall_at_1_std
value: -2.9863
- type: nauc_recall_at_1_diff1
value: 74.0857
- type: nauc_recall_at_3_max
value: 65.8657
- type: nauc_recall_at_3_std
value: -2.2125
- type: nauc_recall_at_3_diff1
value: 65.75649999999999
- type: nauc_recall_at_5_max
value: 74.348
- type: nauc_recall_at_5_std
value: 8.7503
- type: nauc_recall_at_5_diff1
value: 66.9693
- type: nauc_recall_at_10_max
value: 77.9494
- type: nauc_recall_at_10_std
value: 12.8688
- type: nauc_recall_at_10_diff1
value: 75.7287
- type: nauc_recall_at_20_max
value: 72.9655
- type: nauc_recall_at_20_std
value: 0.8702
- type: nauc_recall_at_20_diff1
value: 76.5864
- type: nauc_recall_at_100_max
value: 80.4563
- type: nauc_recall_at_100_std
value: -9.278699999999999
- type: nauc_recall_at_100_diff1
value: 92.793
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 46.2744
- type: nauc_precision_at_1_std
value: -2.9863
- type: nauc_precision_at_1_diff1
value: 74.0857
- type: nauc_precision_at_3_max
value: 65.8657
- type: nauc_precision_at_3_std
value: -2.2125
- type: nauc_precision_at_3_diff1
value: 65.75649999999999
- type: nauc_precision_at_5_max
value: 74.348
- type: nauc_precision_at_5_std
value: 8.7503
- type: nauc_precision_at_5_diff1
value: 66.9693
- type: nauc_precision_at_10_max
value: 77.9494
- type: nauc_precision_at_10_std
value: 12.8688
- type: nauc_precision_at_10_diff1
value: 75.7287
- type: nauc_precision_at_20_max
value: 72.9655
- type: nauc_precision_at_20_std
value: 0.8702
- type: nauc_precision_at_20_diff1
value: 76.5864
- type: nauc_precision_at_100_max
value: 80.4563
- type: nauc_precision_at_100_std
value: -9.278699999999999
- type: nauc_precision_at_100_diff1
value: 92.793
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 46.2744
- type: nauc_mrr_at_1_std
value: -2.9863
- type: nauc_mrr_at_1_diff1
value: 74.0857
- type: nauc_mrr_at_3_max
value: 52.0957
- type: nauc_mrr_at_3_std
value: -3.5077999999999996
- type: nauc_mrr_at_3_diff1
value: 71.90530000000001
- type: nauc_mrr_at_5_max
value: 51.9209
- type: nauc_mrr_at_5_std
value: -2.7184
- type: nauc_mrr_at_5_diff1
value: 72.3474
- type: nauc_mrr_at_10_max
value: 51.642900000000004
- type: nauc_mrr_at_10_std
value: -2.8069
- type: nauc_mrr_at_10_diff1
value: 72.74589999999999
- type: nauc_mrr_at_20_max
value: 51.451800000000006
- type: nauc_mrr_at_20_std
value: -2.9922
- type: nauc_mrr_at_20_diff1
value: 72.7222
- type: nauc_mrr_at_100_max
value: 51.3795
- type: nauc_mrr_at_100_std
value: -3.0112
- type: nauc_mrr_at_100_diff1
value: 72.723
- type: nauc_mrr_at_1000_max
value: 51.3724
- type: nauc_mrr_at_1000_std
value: -3.009
- type: nauc_mrr_at_1000_diff1
value: 72.7192
- type: main_score
value: 84.922
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetRetrieval (ruby)
type: code-search-net/code_search_net
config: ruby
split: test
revision: fdc6a9e39575768c27eb8a2a5f702bf846eb4759
metrics:
- type: ndcg_at_1
value: 61.9
- type: ndcg_at_3
value: 71.91
- type: ndcg_at_5
value: 74.11
- type: ndcg_at_10
value: 75.274
- type: ndcg_at_20
value: 75.97
- type: ndcg_at_100
value: 77.021
- type: ndcg_at_1000
value: 77.511
- type: map_at_1
value: 61.9
- type: map_at_3
value: 69.55
- type: map_at_5
value: 70.78
- type: map_at_10
value: 71.26
- type: map_at_20
value: 71.45899999999999
- type: map_at_100
value: 71.609
- type: map_at_1000
value: 71.624
- type: recall_at_1
value: 61.9
- type: recall_at_3
value: 78.7
- type: recall_at_5
value: 84.0
- type: recall_at_10
value: 87.6
- type: recall_at_20
value: 90.3
- type: recall_at_100
value: 95.89999999999999
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 61.9
- type: precision_at_3
value: 26.233
- type: precision_at_5
value: 16.8
- type: precision_at_10
value: 8.76
- type: precision_at_20
value: 4.515000000000001
- type: precision_at_100
value: 0.959
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 61.9
- type: mrr_at_3
value: 69.55
- type: mrr_at_5
value: 70.78
- type: mrr_at_10
value: 71.2604
- type: mrr_at_20
value: 71.4589
- type: mrr_at_100
value: 71.609
- type: mrr_at_1000
value: 71.6242
- type: nauc_ndcg_at_1_max
value: 51.8333
- type: nauc_ndcg_at_1_std
value: 8.4163
- type: nauc_ndcg_at_1_diff1
value: 72.37700000000001
- type: nauc_ndcg_at_3_max
value: 56.0395
- type: nauc_ndcg_at_3_std
value: 12.583
- type: nauc_ndcg_at_3_diff1
value: 67.5758
- type: nauc_ndcg_at_5_max
value: 56.35289999999999
- type: nauc_ndcg_at_5_std
value: 13.9102
- type: nauc_ndcg_at_5_diff1
value: 68.36179999999999
- type: nauc_ndcg_at_10_max
value: 55.954499999999996
- type: nauc_ndcg_at_10_std
value: 14.8003
- type: nauc_ndcg_at_10_diff1
value: 68.3755
- type: nauc_ndcg_at_20_max
value: 56.2808
- type: nauc_ndcg_at_20_std
value: 16.0875
- type: nauc_ndcg_at_20_diff1
value: 68.3962
- type: nauc_ndcg_at_100_max
value: 56.3164
- type: nauc_ndcg_at_100_std
value: 15.8916
- type: nauc_ndcg_at_100_diff1
value: 69.00699999999999
- type: nauc_ndcg_at_1000_max
value: 55.785700000000006
- type: nauc_ndcg_at_1000_std
value: 14.3348
- type: nauc_ndcg_at_1000_diff1
value: 69.0698
- type: nauc_map_at_1_max
value: 51.8333
- type: nauc_map_at_1_std
value: 8.4163
- type: nauc_map_at_1_diff1
value: 72.37700000000001
- type: nauc_map_at_3_max
value: 54.942800000000005
- type: nauc_map_at_3_std
value: 11.2973
- type: nauc_map_at_3_diff1
value: 68.9311
- type: nauc_map_at_5_max
value: 55.0587
- type: nauc_map_at_5_std
value: 11.9547
- type: nauc_map_at_5_diff1
value: 69.3713
- type: nauc_map_at_10_max
value: 54.9098
- type: nauc_map_at_10_std
value: 12.2453
- type: nauc_map_at_10_diff1
value: 69.3958
- type: nauc_map_at_20_max
value: 54.9689
- type: nauc_map_at_20_std
value: 12.524799999999999
- type: nauc_map_at_20_diff1
value: 69.4109
- type: nauc_map_at_100_max
value: 54.9906
- type: nauc_map_at_100_std
value: 12.500300000000001
- type: nauc_map_at_100_diff1
value: 69.50319999999999
- type: nauc_map_at_1000_max
value: 54.97840000000001
- type: nauc_map_at_1000_std
value: 12.4639
- type: nauc_map_at_1000_diff1
value: 69.50460000000001
- type: nauc_recall_at_1_max
value: 51.8333
- type: nauc_recall_at_1_std
value: 8.4163
- type: nauc_recall_at_1_diff1
value: 72.37700000000001
- type: nauc_recall_at_3_max
value: 60.100699999999996
- type: nauc_recall_at_3_std
value: 17.4623
- type: nauc_recall_at_3_diff1
value: 62.495599999999996
- type: nauc_recall_at_5_max
value: 62.3622
- type: nauc_recall_at_5_std
value: 23.282700000000002
- type: nauc_recall_at_5_diff1
value: 63.8786
- type: nauc_recall_at_10_max
value: 61.567899999999995
- type: nauc_recall_at_10_std
value: 30.543300000000002
- type: nauc_recall_at_10_diff1
value: 62.765800000000006
- type: nauc_recall_at_20_max
value: 65.8648
- type: nauc_recall_at_20_std
value: 45.2891
- type: nauc_recall_at_20_diff1
value: 61.5048
- type: nauc_recall_at_100_max
value: 77.73790000000001
- type: nauc_recall_at_100_std
value: 78.3004
- type: nauc_recall_at_100_diff1
value: 66.54820000000001
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 51.8333
- type: nauc_precision_at_1_std
value: 8.4163
- type: nauc_precision_at_1_diff1
value: 72.37700000000001
- type: nauc_precision_at_3_max
value: 60.100699999999996
- type: nauc_precision_at_3_std
value: 17.4623
- type: nauc_precision_at_3_diff1
value: 62.495599999999996
- type: nauc_precision_at_5_max
value: 62.3622
- type: nauc_precision_at_5_std
value: 23.282700000000002
- type: nauc_precision_at_5_diff1
value: 63.8786
- type: nauc_precision_at_10_max
value: 61.567899999999995
- type: nauc_precision_at_10_std
value: 30.543300000000002
- type: nauc_precision_at_10_diff1
value: 62.765800000000006
- type: nauc_precision_at_20_max
value: 65.8648
- type: nauc_precision_at_20_std
value: 45.2891
- type: nauc_precision_at_20_diff1
value: 61.5048
- type: nauc_precision_at_100_max
value: 77.73790000000001
- type: nauc_precision_at_100_std
value: 78.3004
- type: nauc_precision_at_100_diff1
value: 66.54820000000001
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 51.8333
- type: nauc_mrr_at_1_std
value: 8.4163
- type: nauc_mrr_at_1_diff1
value: 72.37700000000001
- type: nauc_mrr_at_3_max
value: 54.942800000000005
- type: nauc_mrr_at_3_std
value: 11.2973
- type: nauc_mrr_at_3_diff1
value: 68.9311
- type: nauc_mrr_at_5_max
value: 55.0587
- type: nauc_mrr_at_5_std
value: 11.9547
- type: nauc_mrr_at_5_diff1
value: 69.3713
- type: nauc_mrr_at_10_max
value: 54.9098
- type: nauc_mrr_at_10_std
value: 12.2453
- type: nauc_mrr_at_10_diff1
value: 69.3958
- type: nauc_mrr_at_20_max
value: 54.9689
- type: nauc_mrr_at_20_std
value: 12.524799999999999
- type: nauc_mrr_at_20_diff1
value: 69.4109
- type: nauc_mrr_at_100_max
value: 54.9906
- type: nauc_mrr_at_100_std
value: 12.500300000000001
- type: nauc_mrr_at_100_diff1
value: 69.50319999999999
- type: nauc_mrr_at_1000_max
value: 54.97840000000001
- type: nauc_mrr_at_1000_std
value: 12.4639
- type: nauc_mrr_at_1000_diff1
value: 69.50460000000001
- type: main_score
value: 75.274
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetRetrieval (java)
type: code-search-net/code_search_net
config: java
split: test
revision: fdc6a9e39575768c27eb8a2a5f702bf846eb4759
metrics:
- type: ndcg_at_1
value: 52.6
- type: ndcg_at_3
value: 64.044
- type: ndcg_at_5
value: 67.202
- type: ndcg_at_10
value: 69.447
- type: ndcg_at_20
value: 70.488
- type: ndcg_at_100
value: 71.481
- type: ndcg_at_1000
value: 71.995
- type: map_at_1
value: 52.6
- type: map_at_3
value: 61.317
- type: map_at_5
value: 63.062
- type: map_at_10
value: 64.01400000000001
- type: map_at_20
value: 64.302
- type: map_at_100
value: 64.443
- type: map_at_1000
value: 64.459
- type: recall_at_1
value: 52.6
- type: recall_at_3
value: 71.89999999999999
- type: recall_at_5
value: 79.60000000000001
- type: recall_at_10
value: 86.4
- type: recall_at_20
value: 90.5
- type: recall_at_100
value: 95.8
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 52.6
- type: precision_at_3
value: 23.967
- type: precision_at_5
value: 15.920000000000002
- type: precision_at_10
value: 8.64
- type: precision_at_20
value: 4.5249999999999995
- type: precision_at_100
value: 0.958
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 52.6
- type: mrr_at_3
value: 61.316700000000004
- type: mrr_at_5
value: 63.0617
- type: mrr_at_10
value: 64.01400000000001
- type: mrr_at_20
value: 64.3022
- type: mrr_at_100
value: 64.443
- type: mrr_at_1000
value: 64.4595
- type: nauc_ndcg_at_1_max
value: 38.4317
- type: nauc_ndcg_at_1_std
value: -18.9677
- type: nauc_ndcg_at_1_diff1
value: 62.74570000000001
- type: nauc_ndcg_at_3_max
value: 43.612
- type: nauc_ndcg_at_3_std
value: -14.6587
- type: nauc_ndcg_at_3_diff1
value: 56.92230000000001
- type: nauc_ndcg_at_5_max
value: 44.840999999999994
- type: nauc_ndcg_at_5_std
value: -12.328600000000002
- type: nauc_ndcg_at_5_diff1
value: 56.998000000000005
- type: nauc_ndcg_at_10_max
value: 45.5768
- type: nauc_ndcg_at_10_std
value: -10.871
- type: nauc_ndcg_at_10_diff1
value: 57.36130000000001
- type: nauc_ndcg_at_20_max
value: 45.1125
- type: nauc_ndcg_at_20_std
value: -10.575
- type: nauc_ndcg_at_20_diff1
value: 57.2132
- type: nauc_ndcg_at_100_max
value: 45.4087
- type: nauc_ndcg_at_100_std
value: -10.356300000000001
- type: nauc_ndcg_at_100_diff1
value: 57.607
- type: nauc_ndcg_at_1000_max
value: 44.2686
- type: nauc_ndcg_at_1000_std
value: -12.2661
- type: nauc_ndcg_at_1000_diff1
value: 58.0082
- type: nauc_map_at_1_max
value: 38.4317
- type: nauc_map_at_1_std
value: -18.9677
- type: nauc_map_at_1_diff1
value: 62.74570000000001
- type: nauc_map_at_3_max
value: 42.278
- type: nauc_map_at_3_std
value: -15.937499999999998
- type: nauc_map_at_3_diff1
value: 58.4671
- type: nauc_map_at_5_max
value: 42.8414
- type: nauc_map_at_5_std
value: -14.7742
- type: nauc_map_at_5_diff1
value: 58.582100000000004
- type: nauc_map_at_10_max
value: 43.0236
- type: nauc_map_at_10_std
value: -14.3595
- type: nauc_map_at_10_diff1
value: 58.765100000000004
- type: nauc_map_at_20_max
value: 42.8918
- type: nauc_map_at_20_std
value: -14.335500000000001
- type: nauc_map_at_20_diff1
value: 58.746500000000005
- type: nauc_map_at_100_max
value: 42.9383
- type: nauc_map_at_100_std
value: -14.296600000000002
- type: nauc_map_at_100_diff1
value: 58.796099999999996
- type: nauc_map_at_1000_max
value: 42.9079
- type: nauc_map_at_1000_std
value: -14.3452
- type: nauc_map_at_1000_diff1
value: 58.8048
- type: nauc_recall_at_1_max
value: 38.4317
- type: nauc_recall_at_1_std
value: -18.9677
- type: nauc_recall_at_1_diff1
value: 62.74570000000001
- type: nauc_recall_at_3_max
value: 48.255199999999995
- type: nauc_recall_at_3_std
value: -10.116999999999999
- type: nauc_recall_at_3_diff1
value: 51.5211
- type: nauc_recall_at_5_max
value: 53.7581
- type: nauc_recall_at_5_std
value: -1.1828
- type: nauc_recall_at_5_diff1
value: 50.139199999999995
- type: nauc_recall_at_10_max
value: 62.2138
- type: nauc_recall_at_10_std
value: 12.5761
- type: nauc_recall_at_10_diff1
value: 49.091499999999996
- type: nauc_recall_at_20_max
value: 64.05619999999999
- type: nauc_recall_at_20_std
value: 24.6892
- type: nauc_recall_at_20_diff1
value: 44.4292
- type: nauc_recall_at_100_max
value: 94.1543
- type: nauc_recall_at_100_std
value: 72.2889
- type: nauc_recall_at_100_diff1
value: 39.8115
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 38.4317
- type: nauc_precision_at_1_std
value: -18.9677
- type: nauc_precision_at_1_diff1
value: 62.74570000000001
- type: nauc_precision_at_3_max
value: 48.255199999999995
- type: nauc_precision_at_3_std
value: -10.116999999999999
- type: nauc_precision_at_3_diff1
value: 51.5211
- type: nauc_precision_at_5_max
value: 53.7581
- type: nauc_precision_at_5_std
value: -1.1828
- type: nauc_precision_at_5_diff1
value: 50.139199999999995
- type: nauc_precision_at_10_max
value: 62.2138
- type: nauc_precision_at_10_std
value: 12.5761
- type: nauc_precision_at_10_diff1
value: 49.091499999999996
- type: nauc_precision_at_20_max
value: 64.05619999999999
- type: nauc_precision_at_20_std
value: 24.6892
- type: nauc_precision_at_20_diff1
value: 44.4292
- type: nauc_precision_at_100_max
value: 94.1543
- type: nauc_precision_at_100_std
value: 72.2889
- type: nauc_precision_at_100_diff1
value: 39.8115
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 38.4317
- type: nauc_mrr_at_1_std
value: -18.9677
- type: nauc_mrr_at_1_diff1
value: 62.74570000000001
- type: nauc_mrr_at_3_max
value: 42.278
- type: nauc_mrr_at_3_std
value: -15.937499999999998
- type: nauc_mrr_at_3_diff1
value: 58.4671
- type: nauc_mrr_at_5_max
value: 42.8414
- type: nauc_mrr_at_5_std
value: -14.7742
- type: nauc_mrr_at_5_diff1
value: 58.582100000000004
- type: nauc_mrr_at_10_max
value: 43.0236
- type: nauc_mrr_at_10_std
value: -14.3595
- type: nauc_mrr_at_10_diff1
value: 58.765100000000004
- type: nauc_mrr_at_20_max
value: 42.8918
- type: nauc_mrr_at_20_std
value: -14.335500000000001
- type: nauc_mrr_at_20_diff1
value: 58.746500000000005
- type: nauc_mrr_at_100_max
value: 42.9383
- type: nauc_mrr_at_100_std
value: -14.296600000000002
- type: nauc_mrr_at_100_diff1
value: 58.796099999999996
- type: nauc_mrr_at_1000_max
value: 42.9079
- type: nauc_mrr_at_1000_std
value: -14.3452
- type: nauc_mrr_at_1000_diff1
value: 58.8048
- type: main_score
value: 69.447
- task:
type: Retrieval
dataset:
name: MTEB CodeSearchNetRetrieval (php)
type: code-search-net/code_search_net
config: php
split: test
revision: fdc6a9e39575768c27eb8a2a5f702bf846eb4759
metrics:
- type: ndcg_at_1
value: 57.699999999999996
- type: ndcg_at_3
value: 69.071
- type: ndcg_at_5
value: 71.331
- type: ndcg_at_10
value: 73.455
- type: ndcg_at_20
value: 74.298
- type: ndcg_at_100
value: 74.842
- type: ndcg_at_1000
value: 75.411
- type: map_at_1
value: 57.699999999999996
- type: map_at_3
value: 66.233
- type: map_at_5
value: 67.508
- type: map_at_10
value: 68.398
- type: map_at_20
value: 68.634
- type: map_at_100
value: 68.718
- type: map_at_1000
value: 68.735
- type: recall_at_1
value: 57.699999999999996
- type: recall_at_3
value: 77.3
- type: recall_at_5
value: 82.69999999999999
- type: recall_at_10
value: 89.2
- type: recall_at_20
value: 92.5
- type: recall_at_100
value: 95.3
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 57.699999999999996
- type: precision_at_3
value: 25.767
- type: precision_at_5
value: 16.54
- type: precision_at_10
value: 8.92
- type: precision_at_20
value: 4.625
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 57.699999999999996
- type: mrr_at_3
value: 66.2333
- type: mrr_at_5
value: 67.5083
- type: mrr_at_10
value: 68.398
- type: mrr_at_20
value: 68.6345
- type: mrr_at_100
value: 68.71770000000001
- type: mrr_at_1000
value: 68.7351
- type: nauc_ndcg_at_1_max
value: 47.0017
- type: nauc_ndcg_at_1_std
value: 7.702000000000001
- type: nauc_ndcg_at_1_diff1
value: 65.5265
- type: nauc_ndcg_at_3_max
value: 53.1223
- type: nauc_ndcg_at_3_std
value: 14.5277
- type: nauc_ndcg_at_3_diff1
value: 60.5267
- type: nauc_ndcg_at_5_max
value: 55.99570000000001
- type: nauc_ndcg_at_5_std
value: 17.467
- type: nauc_ndcg_at_5_diff1
value: 63.1188
- type: nauc_ndcg_at_10_max
value: 55.7826
- type: nauc_ndcg_at_10_std
value: 19.1279
- type: nauc_ndcg_at_10_diff1
value: 63.463
- type: nauc_ndcg_at_20_max
value: 55.2338
- type: nauc_ndcg_at_20_std
value: 19.5684
- type: nauc_ndcg_at_20_diff1
value: 63.7312
- type: nauc_ndcg_at_100_max
value: 54.898199999999996
- type: nauc_ndcg_at_100_std
value: 19.1172
- type: nauc_ndcg_at_100_diff1
value: 63.7935
- type: nauc_ndcg_at_1000_max
value: 53.9486
- type: nauc_ndcg_at_1000_std
value: 17.0841
- type: nauc_ndcg_at_1000_diff1
value: 63.5189
- type: nauc_map_at_1_max
value: 47.0017
- type: nauc_map_at_1_std
value: 7.702000000000001
- type: nauc_map_at_1_diff1
value: 65.5265
- type: nauc_map_at_3_max
value: 51.3811
- type: nauc_map_at_3_std
value: 12.6201
- type: nauc_map_at_3_diff1
value: 61.781299999999995
- type: nauc_map_at_5_max
value: 52.788599999999995
- type: nauc_map_at_5_std
value: 13.9926
- type: nauc_map_at_5_diff1
value: 63.155300000000004
- type: nauc_map_at_10_max
value: 52.630900000000004
- type: nauc_map_at_10_std
value: 14.5419
- type: nauc_map_at_10_diff1
value: 63.299499999999995
- type: nauc_map_at_20_max
value: 52.4779
- type: nauc_map_at_20_std
value: 14.615300000000001
- type: nauc_map_at_20_diff1
value: 63.360099999999996
- type: nauc_map_at_100_max
value: 52.434999999999995
- type: nauc_map_at_100_std
value: 14.5613
- type: nauc_map_at_100_diff1
value: 63.362700000000004
- type: nauc_map_at_1000_max
value: 52.412000000000006
- type: nauc_map_at_1000_std
value: 14.5121
- type: nauc_map_at_1000_diff1
value: 63.361000000000004
- type: nauc_recall_at_1_max
value: 47.0017
- type: nauc_recall_at_1_std
value: 7.702000000000001
- type: nauc_recall_at_1_diff1
value: 65.5265
- type: nauc_recall_at_3_max
value: 59.7842
- type: nauc_recall_at_3_std
value: 21.8077
- type: nauc_recall_at_3_diff1
value: 55.81850000000001
- type: nauc_recall_at_5_max
value: 71.5097
- type: nauc_recall_at_5_std
value: 34.341899999999995
- type: nauc_recall_at_5_diff1
value: 63.604000000000006
- type: nauc_recall_at_10_max
value: 78.1568
- type: nauc_recall_at_10_std
value: 53.016600000000004
- type: nauc_recall_at_10_diff1
value: 65.779
- type: nauc_recall_at_20_max
value: 81.5145
- type: nauc_recall_at_20_std
value: 72.038
- type: nauc_recall_at_20_diff1
value: 69.7603
- type: nauc_recall_at_100_max
value: 89.0587
- type: nauc_recall_at_100_std
value: 91.89070000000001
- type: nauc_recall_at_100_diff1
value: 75.1088
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 47.0017
- type: nauc_precision_at_1_std
value: 7.702000000000001
- type: nauc_precision_at_1_diff1
value: 65.5265
- type: nauc_precision_at_3_max
value: 59.7842
- type: nauc_precision_at_3_std
value: 21.8077
- type: nauc_precision_at_3_diff1
value: 55.81850000000001
- type: nauc_precision_at_5_max
value: 71.5097
- type: nauc_precision_at_5_std
value: 34.341899999999995
- type: nauc_precision_at_5_diff1
value: 63.604000000000006
- type: nauc_precision_at_10_max
value: 78.1568
- type: nauc_precision_at_10_std
value: 53.016600000000004
- type: nauc_precision_at_10_diff1
value: 65.779
- type: nauc_precision_at_20_max
value: 81.5145
- type: nauc_precision_at_20_std
value: 72.038
- type: nauc_precision_at_20_diff1
value: 69.7603
- type: nauc_precision_at_100_max
value: 89.0587
- type: nauc_precision_at_100_std
value: 91.89070000000001
- type: nauc_precision_at_100_diff1
value: 75.1088
- type: nauc_precision_at_1000_max
value: .nan
- type: nauc_precision_at_1000_std
value: .nan
- type: nauc_precision_at_1000_diff1
value: .nan
- type: nauc_mrr_at_1_max
value: 47.0017
- type: nauc_mrr_at_1_std
value: 7.702000000000001
- type: nauc_mrr_at_1_diff1
value: 65.5265
- type: nauc_mrr_at_3_max
value: 51.3811
- type: nauc_mrr_at_3_std
value: 12.6201
- type: nauc_mrr_at_3_diff1
value: 61.781299999999995
- type: nauc_mrr_at_5_max
value: 52.788599999999995
- type: nauc_mrr_at_5_std
value: 13.9926
- type: nauc_mrr_at_5_diff1
value: 63.155300000000004
- type: nauc_mrr_at_10_max
value: 52.630900000000004
- type: nauc_mrr_at_10_std
value: 14.5419
- type: nauc_mrr_at_10_diff1
value: 63.299499999999995
- type: nauc_mrr_at_20_max
value: 52.4779
- type: nauc_mrr_at_20_std
value: 14.615300000000001
- type: nauc_mrr_at_20_diff1
value: 63.360099999999996
- type: nauc_mrr_at_100_max
value: 52.434999999999995
- type: nauc_mrr_at_100_std
value: 14.5613
- type: nauc_mrr_at_100_diff1
value: 63.362700000000004
- type: nauc_mrr_at_1000_max
value: 52.412000000000006
- type: nauc_mrr_at_1000_std
value: 14.5121
- type: nauc_mrr_at_1000_diff1
value: 63.361000000000004
- type: main_score
value: 73.455
- task:
type: Retrieval
dataset:
name: MTEB CodeTransOceanContest (default)
type: CoIR-Retrieval/codetrans-contest
config: default
split: test
revision: 20da4eb20a4b17300c0986ee148c90867a7f2a4d
metrics:
- type: ndcg_at_1
value: 46.154
- type: ndcg_at_3
value: 52.019999999999996
- type: ndcg_at_5
value: 53.929
- type: ndcg_at_10
value: 57.475
- type: ndcg_at_20
value: 59.861
- type: ndcg_at_100
value: 61.577000000000005
- type: ndcg_at_1000
value: 62.755
- type: map_at_1
value: 46.154
- type: map_at_3
value: 50.602999999999994
- type: map_at_5
value: 51.68899999999999
- type: map_at_10
value: 53.174
- type: map_at_20
value: 53.818
- type: map_at_100
value: 54.041
- type: map_at_1000
value: 54.081
- type: recall_at_1
value: 46.154
- type: recall_at_3
value: 56.108999999999995
- type: recall_at_5
value: 60.633
- type: recall_at_10
value: 71.493
- type: recall_at_20
value: 80.99499999999999
- type: recall_at_100
value: 90.498
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 46.154
- type: precision_at_3
value: 18.703
- type: precision_at_5
value: 12.127
- type: precision_at_10
value: 7.149
- type: precision_at_20
value: 4.05
- type: precision_at_100
value: 0.905
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 46.153800000000004
- type: mrr_at_3
value: 50.6033
- type: mrr_at_5
value: 51.6893
- type: mrr_at_10
value: 53.173899999999996
- type: mrr_at_20
value: 53.8181
- type: mrr_at_100
value: 54.0405
- type: mrr_at_1000
value: 54.081199999999995
- type: nauc_ndcg_at_1_max
value: 59.032
- type: nauc_ndcg_at_1_std
value: 8.2815
- type: nauc_ndcg_at_1_diff1
value: 80.5428
- type: nauc_ndcg_at_3_max
value: 55.47410000000001
- type: nauc_ndcg_at_3_std
value: 4.4284
- type: nauc_ndcg_at_3_diff1
value: 77.2405
- type: nauc_ndcg_at_5_max
value: 54.6337
- type: nauc_ndcg_at_5_std
value: 5.3048
- type: nauc_ndcg_at_5_diff1
value: 76.5969
- type: nauc_ndcg_at_10_max
value: 51.8584
- type: nauc_ndcg_at_10_std
value: 3.5628
- type: nauc_ndcg_at_10_diff1
value: 74.6966
- type: nauc_ndcg_at_20_max
value: 54.3478
- type: nauc_ndcg_at_20_std
value: 4.3697
- type: nauc_ndcg_at_20_diff1
value: 75.6032
- type: nauc_ndcg_at_100_max
value: 55.488400000000006
- type: nauc_ndcg_at_100_std
value: 6.101
- type: nauc_ndcg_at_100_diff1
value: 76.0249
- type: nauc_ndcg_at_1000_max
value: 55.1091
- type: nauc_ndcg_at_1000_std
value: 5.5951
- type: nauc_ndcg_at_1000_diff1
value: 76.3907
- type: nauc_map_at_1_max
value: 59.032
- type: nauc_map_at_1_std
value: 8.2815
- type: nauc_map_at_1_diff1
value: 80.5428
- type: nauc_map_at_3_max
value: 56.261700000000005
- type: nauc_map_at_3_std
value: 5.3123
- type: nauc_map_at_3_diff1
value: 77.823
- type: nauc_map_at_5_max
value: 55.7926
- type: nauc_map_at_5_std
value: 5.8055
- type: nauc_map_at_5_diff1
value: 77.4779
- type: nauc_map_at_10_max
value: 54.77459999999999
- type: nauc_map_at_10_std
value: 5.1733
- type: nauc_map_at_10_diff1
value: 76.79249999999999
- type: nauc_map_at_20_max
value: 55.4426
- type: nauc_map_at_20_std
value: 5.4346
- type: nauc_map_at_20_diff1
value: 77.0378
- type: nauc_map_at_100_max
value: 55.6049
- type: nauc_map_at_100_std
value: 5.7131
- type: nauc_map_at_100_diff1
value: 77.0756
- type: nauc_map_at_1000_max
value: 55.5915
- type: nauc_map_at_1000_std
value: 5.7007
- type: nauc_map_at_1000_diff1
value: 77.0939
- type: nauc_recall_at_1_max
value: 59.032
- type: nauc_recall_at_1_std
value: 8.2815
- type: nauc_recall_at_1_diff1
value: 80.5428
- type: nauc_recall_at_3_max
value: 53.1398
- type: nauc_recall_at_3_std
value: 1.7934999999999999
- type: nauc_recall_at_3_diff1
value: 75.5862
- type: nauc_recall_at_5_max
value: 50.9304
- type: nauc_recall_at_5_std
value: 3.8924
- type: nauc_recall_at_5_diff1
value: 73.8369
- type: nauc_recall_at_10_max
value: 38.9905
- type: nauc_recall_at_10_std
value: -3.4564999999999997
- type: nauc_recall_at_10_diff1
value: 65.5567
- type: nauc_recall_at_20_max
value: 50.0429
- type: nauc_recall_at_20_std
value: -1.4551
- type: nauc_recall_at_20_diff1
value: 67.9871
- type: nauc_recall_at_100_max
value: 63.44030000000001
- type: nauc_recall_at_100_std
value: 17.8876
- type: nauc_recall_at_100_diff1
value: 68.9388
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: 59.032
- type: nauc_precision_at_1_std
value: 8.2815
- type: nauc_precision_at_1_diff1
value: 80.5428
- type: nauc_precision_at_3_max
value: 53.1398
- type: nauc_precision_at_3_std
value: 1.7934999999999999
- type: nauc_precision_at_3_diff1
value: 75.5862
- type: nauc_precision_at_5_max
value: 50.9304
- type: nauc_precision_at_5_std
value: 3.8924
- type: nauc_precision_at_5_diff1
value: 73.8369
- type: nauc_precision_at_10_max
value: 38.9905
- type: nauc_precision_at_10_std
value: -3.4564999999999997
- type: nauc_precision_at_10_diff1
value: 65.5567
- type: nauc_precision_at_20_max
value: 50.0429
- type: nauc_precision_at_20_std
value: -1.4551
- type: nauc_precision_at_20_diff1
value: 67.9871
- type: nauc_precision_at_100_max
value: 63.44030000000001
- type: nauc_precision_at_100_std
value: 17.8876
- type: nauc_precision_at_100_diff1
value: 68.9388
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: 59.032
- type: nauc_mrr_at_1_std
value: 8.2815
- type: nauc_mrr_at_1_diff1
value: 80.5428
- type: nauc_mrr_at_3_max
value: 56.261700000000005
- type: nauc_mrr_at_3_std
value: 5.3123
- type: nauc_mrr_at_3_diff1
value: 77.823
- type: nauc_mrr_at_5_max
value: 55.7926
- type: nauc_mrr_at_5_std
value: 5.8055
- type: nauc_mrr_at_5_diff1
value: 77.4779
- type: nauc_mrr_at_10_max
value: 54.77459999999999
- type: nauc_mrr_at_10_std
value: 5.1733
- type: nauc_mrr_at_10_diff1
value: 76.79249999999999
- type: nauc_mrr_at_20_max
value: 55.4426
- type: nauc_mrr_at_20_std
value: 5.4346
- type: nauc_mrr_at_20_diff1
value: 77.0378
- type: nauc_mrr_at_100_max
value: 55.6049
- type: nauc_mrr_at_100_std
value: 5.7131
- type: nauc_mrr_at_100_diff1
value: 77.0756
- type: nauc_mrr_at_1000_max
value: 55.5915
- type: nauc_mrr_at_1000_std
value: 5.7007
- type: nauc_mrr_at_1000_diff1
value: 77.0939
- type: main_score
value: 57.475
- task:
type: Retrieval
dataset:
name: MTEB CodeTransOceanDL (default)
type: CoIR-Retrieval/codetrans-dl
config: default
split: test
revision: 281562cb8a1265ab5c0824bfa6ddcd9b0a15618f
metrics:
- type: ndcg_at_1
value: 8.889
- type: ndcg_at_3
value: 10.700999999999999
- type: ndcg_at_5
value: 16.082
- type: ndcg_at_10
value: 26.888
- type: ndcg_at_20
value: 35.608000000000004
- type: ndcg_at_100
value: 36.459
- type: ndcg_at_1000
value: 36.775999999999996
- type: map_at_1
value: 8.889
- type: map_at_3
value: 10.184999999999999
- type: map_at_5
value: 13.241
- type: map_at_10
value: 17.502000000000002
- type: map_at_20
value: 19.978
- type: map_at_100
value: 20.108
- type: map_at_1000
value: 20.125
- type: recall_at_1
value: 8.889
- type: recall_at_3
value: 12.222
- type: recall_at_5
value: 25.0
- type: recall_at_10
value: 59.443999999999996
- type: recall_at_20
value: 93.333
- type: recall_at_100
value: 97.77799999999999
- type: recall_at_1000
value: 100.0
- type: precision_at_1
value: 8.889
- type: precision_at_3
value: 4.074
- type: precision_at_5
value: 5.0
- type: precision_at_10
value: 5.944
- type: precision_at_20
value: 4.667000000000001
- type: precision_at_100
value: 0.9780000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 3.8889
- type: mrr_at_3
value: 8.9815
- type: mrr_at_5
value: 10.2593
- type: mrr_at_10
value: 15.263399999999999
- type: mrr_at_20
value: 17.711
- type: mrr_at_100
value: 17.8421
- type: mrr_at_1000
value: 17.8596
- type: nauc_ndcg_at_1_max
value: -40.8791
- type: nauc_ndcg_at_1_std
value: -22.7629
- type: nauc_ndcg_at_1_diff1
value: -23.105
- type: nauc_ndcg_at_3_max
value: -43.187599999999996
- type: nauc_ndcg_at_3_std
value: -26.9994
- type: nauc_ndcg_at_3_diff1
value: -15.4181
- type: nauc_ndcg_at_5_max
value: -37.2549
- type: nauc_ndcg_at_5_std
value: -24.4115
- type: nauc_ndcg_at_5_diff1
value: -5.7322999999999995
- type: nauc_ndcg_at_10_max
value: -36.3471
- type: nauc_ndcg_at_10_std
value: -22.8065
- type: nauc_ndcg_at_10_diff1
value: -5.3767000000000005
- type: nauc_ndcg_at_20_max
value: -35.829100000000004
- type: nauc_ndcg_at_20_std
value: -20.787300000000002
- type: nauc_ndcg_at_20_diff1
value: -9.6038
- type: nauc_ndcg_at_100_max
value: -36.5805
- type: nauc_ndcg_at_100_std
value: -20.1283
- type: nauc_ndcg_at_100_diff1
value: -8.9448
- type: nauc_ndcg_at_1000_max
value: -38.1158
- type: nauc_ndcg_at_1000_std
value: -22.2744
- type: nauc_ndcg_at_1000_diff1
value: -9.8704
- type: nauc_map_at_1_max
value: -40.8791
- type: nauc_map_at_1_std
value: -22.7629
- type: nauc_map_at_1_diff1
value: -23.105
- type: nauc_map_at_3_max
value: -42.559200000000004
- type: nauc_map_at_3_std
value: -25.8594
- type: nauc_map_at_3_diff1
value: -17.2362
- type: nauc_map_at_5_max
value: -38.595800000000004
- type: nauc_map_at_5_std
value: -24.1339
- type: nauc_map_at_5_diff1
value: -10.4452
- type: nauc_map_at_10_max
value: -38.2389
- type: nauc_map_at_10_std
value: -23.453599999999998
- type: nauc_map_at_10_diff1
value: -10.2748
- type: nauc_map_at_20_max
value: -38.8856
- type: nauc_map_at_20_std
value: -23.095499999999998
- type: nauc_map_at_20_diff1
value: -11.695500000000001
- type: nauc_map_at_100_max
value: -38.9696
- type: nauc_map_at_100_std
value: -23.0057
- type: nauc_map_at_100_diff1
value: -11.635900000000001
- type: nauc_map_at_1000_max
value: -39.035399999999996
- type: nauc_map_at_1000_std
value: -23.1075
- type: nauc_map_at_1000_diff1
value: -11.6855
- type: nauc_recall_at_1_max
value: -40.8791
- type: nauc_recall_at_1_std
value: -22.7629
- type: nauc_recall_at_1_diff1
value: -23.105
- type: nauc_recall_at_3_max
value: -44.8047
- type: nauc_recall_at_3_std
value: -29.9296
- type: nauc_recall_at_3_diff1
value: -10.8169
- type: nauc_recall_at_5_max
value: -34.5699
- type: nauc_recall_at_5_std
value: -24.9544
- type: nauc_recall_at_5_diff1
value: 3.4269000000000003
- type: nauc_recall_at_10_max
value: -32.149699999999996
- type: nauc_recall_at_10_std
value: -21.0142
- type: nauc_recall_at_10_diff1
value: 4.358
- type: nauc_recall_at_20_max
value: 0.7547
- type: nauc_recall_at_20_std
value: 7.1739999999999995
- type: nauc_recall_at_20_diff1
value: -3.2252
- type: nauc_recall_at_100_max
value: 41.4332
- type: nauc_recall_at_100_std
value: 86.1111
- type: nauc_recall_at_100_diff1
value: 35.7143
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_precision_at_1_max
value: -40.8791
- type: nauc_precision_at_1_std
value: -22.7629
- type: nauc_precision_at_1_diff1
value: -23.105
- type: nauc_precision_at_3_max
value: -44.8047
- type: nauc_precision_at_3_std
value: -29.9296
- type: nauc_precision_at_3_diff1
value: -10.8169
- type: nauc_precision_at_5_max
value: -34.5699
- type: nauc_precision_at_5_std
value: -24.9544
- type: nauc_precision_at_5_diff1
value: 3.4269000000000003
- type: nauc_precision_at_10_max
value: -32.149699999999996
- type: nauc_precision_at_10_std
value: -21.0142
- type: nauc_precision_at_10_diff1
value: 4.358
- type: nauc_precision_at_20_max
value: 0.7547
- type: nauc_precision_at_20_std
value: 7.1739999999999995
- type: nauc_precision_at_20_diff1
value: -3.2252
- type: nauc_precision_at_100_max
value: 41.4332
- type: nauc_precision_at_100_std
value: 86.1111
- type: nauc_precision_at_100_diff1
value: 35.7143
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 100.0
- type: nauc_precision_at_1000_diff1
value: 100.0
- type: nauc_mrr_at_1_max
value: -42.7345
- type: nauc_mrr_at_1_std
value: -35.9194
- type: nauc_mrr_at_1_diff1
value: -3.8369
- type: nauc_mrr_at_3_max
value: -35.497099999999996
- type: nauc_mrr_at_3_std
value: -28.1283
- type: nauc_mrr_at_3_diff1
value: 22.5336
- type: nauc_mrr_at_5_max
value: -34.9895
- type: nauc_mrr_at_5_std
value: -26.9499
- type: nauc_mrr_at_5_diff1
value: 16.9652
- type: nauc_mrr_at_10_max
value: -36.7778
- type: nauc_mrr_at_10_std
value: -28.069
- type: nauc_mrr_at_10_diff1
value: 18.806700000000003
- type: nauc_mrr_at_20_max
value: -36.2726
- type: nauc_mrr_at_20_std
value: -26.359500000000004
- type: nauc_mrr_at_20_diff1
value: 18.1655
- type: nauc_mrr_at_100_max
value: -36.361
- type: nauc_mrr_at_100_std
value: -26.280900000000003
- type: nauc_mrr_at_100_diff1
value: 18.5228
- type: nauc_mrr_at_1000_max
value: -36.4424
- type: nauc_mrr_at_1000_std
value: -26.415699999999998
- type: nauc_mrr_at_1000_diff1
value: 18.496499999999997
- type: main_score
value: 26.888
- task:
type: Retrieval
dataset:
name: MTEB CosQA (default)
type: CoIR-Retrieval/cosqa
config: default
split: test
revision: bc5efb7e9d437246ce393ed19d772e08e4a79535
metrics:
- type: ndcg_at_1
value: 15.4
- type: ndcg_at_3
value: 23.59
- type: ndcg_at_5
value: 29.779
- type: ndcg_at_10
value: 35.449999999999996
- type: ndcg_at_20
value: 38.309
- type: ndcg_at_100
value: 41.980000000000004
- type: ndcg_at_1000
value: 42.917
- type: map_at_1
value: 15.4
- type: map_at_3
value: 21.4
- type: map_at_5
value: 24.84
- type: map_at_10
value: 27.245
- type: map_at_20
value: 28.043000000000003
- type: map_at_100
value: 28.592000000000002
- type: map_at_1000
value: 28.63
- type: recall_at_1
value: 15.4
- type: recall_at_3
value: 30.0
- type: recall_at_5
value: 45.0
- type: recall_at_10
value: 62.2
- type: recall_at_20
value: 73.4
- type: recall_at_100
value: 92.60000000000001
- type: recall_at_1000
value: 99.8
- type: precision_at_1
value: 15.4
- type: precision_at_3
value: 10.0
- type: precision_at_5
value: 9.0
- type: precision_at_10
value: 6.22
- type: precision_at_20
value: 3.6700000000000004
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 13.600000000000001
- type: mrr_at_3
value: 19.666700000000002
- type: mrr_at_5
value: 22.0867
- type: mrr_at_10
value: 25.020799999999998
- type: mrr_at_20
value: 25.8896
- type: mrr_at_100
value: 26.434400000000004
- type: mrr_at_1000
value: 26.4729
- type: nauc_ndcg_at_1_max
value: 7.9282
- type: nauc_ndcg_at_1_std
value: -14.053299999999998
- type: nauc_ndcg_at_1_diff1
value: 36.687799999999996
- type: nauc_ndcg_at_3_max
value: 11.969899999999999
- type: nauc_ndcg_at_3_std
value: -13.7404
- type: nauc_ndcg_at_3_diff1
value: 22.2386
- type: nauc_ndcg_at_5_max
value: 13.4812
- type: nauc_ndcg_at_5_std
value: -13.2079
- type: nauc_ndcg_at_5_diff1
value: 15.8384
- type: nauc_ndcg_at_10_max
value: 12.061399999999999
- type: nauc_ndcg_at_10_std
value: -15.1337
- type: nauc_ndcg_at_10_diff1
value: 18.804399999999998
- type: nauc_ndcg_at_20_max
value: 14.027000000000001
- type: nauc_ndcg_at_20_std
value: -13.123899999999999
- type: nauc_ndcg_at_20_diff1
value: 18.546499999999998
- type: nauc_ndcg_at_100_max
value: 15.4228
- type: nauc_ndcg_at_100_std
value: -9.7982
- type: nauc_ndcg_at_100_diff1
value: 20.637900000000002
- type: nauc_ndcg_at_1000_max
value: 13.3878
- type: nauc_ndcg_at_1000_std
value: -12.3766
- type: nauc_ndcg_at_1000_diff1
value: 21.2979
- type: nauc_map_at_1_max
value: 7.9282
- type: nauc_map_at_1_std
value: -14.053299999999998
- type: nauc_map_at_1_diff1
value: 36.687799999999996
- type: nauc_map_at_3_max
value: 11.2376
- type: nauc_map_at_3_std
value: -13.882800000000001
- type: nauc_map_at_3_diff1
value: 25.4638
- type: nauc_map_at_5_max
value: 12.0973
- type: nauc_map_at_5_std
value: -13.581399999999999
- type: nauc_map_at_5_diff1
value: 21.6642
- type: nauc_map_at_10_max
value: 11.4818
- type: nauc_map_at_10_std
value: -14.3841
- type: nauc_map_at_10_diff1
value: 23.0484
- type: nauc_map_at_20_max
value: 11.9802
- type: nauc_map_at_20_std
value: -13.8687
- type: nauc_map_at_20_diff1
value: 23.0349
- type: nauc_map_at_100_max
value: 12.112
- type: nauc_map_at_100_std
value: -13.423099999999998
- type: nauc_map_at_100_diff1
value: 23.385
- type: nauc_map_at_1000_max
value: 12.034
- type: nauc_map_at_1000_std
value: -13.5156
- type: nauc_map_at_1000_diff1
value: 23.4084
- type: nauc_recall_at_1_max
value: 7.9282
- type: nauc_recall_at_1_std
value: -14.053299999999998
- type: nauc_recall_at_1_diff1
value: 36.687799999999996
- type: nauc_recall_at_3_max
value: 13.6773
- type: nauc_recall_at_3_std
value: -13.376299999999999
- type: nauc_recall_at_3_diff1
value: 14.4918
- type: nauc_recall_at_5_max
value: 16.8852
- type: nauc_recall_at_5_std
value: -12.237499999999999
- type: nauc_recall_at_5_diff1
value: 1.4449
- type: nauc_recall_at_10_max
value: 13.234499999999999
- type: nauc_recall_at_10_std
value: -17.8241
- type: nauc_recall_at_10_diff1
value: 7.6404
- type: nauc_recall_at_20_max
value: 22.708000000000002
- type: nauc_recall_at_20_std
value: -9.111600000000001
- type: nauc_recall_at_20_diff1
value: 3.4109
- type: nauc_recall_at_100_max
value: 66.1165
- type: nauc_recall_at_100_std
value: 55.2477
- type: nauc_recall_at_100_diff1
value: 5.7612
- type: nauc_recall_at_1000_max
value: 100.0
- type: nauc_recall_at_1000_std
value: 86.9281
- type: nauc_recall_at_1000_diff1
value: 72.2222
- type: nauc_precision_at_1_max
value: 7.9282
- type: nauc_precision_at_1_std
value: -14.053299999999998
- type: nauc_precision_at_1_diff1
value: 36.687799999999996
- type: nauc_precision_at_3_max
value: 13.6773
- type: nauc_precision_at_3_std
value: -13.376299999999999
- type: nauc_precision_at_3_diff1
value: 14.4918
- type: nauc_precision_at_5_max
value: 16.8852
- type: nauc_precision_at_5_std
value: -12.237499999999999
- type: nauc_precision_at_5_diff1
value: 1.4449
- type: nauc_precision_at_10_max
value: 13.234499999999999
- type: nauc_precision_at_10_std
value: -17.8241
- type: nauc_precision_at_10_diff1
value: 7.6404
- type: nauc_precision_at_20_max
value: 22.708000000000002
- type: nauc_precision_at_20_std
value: -9.111600000000001
- type: nauc_precision_at_20_diff1
value: 3.4109
- type: nauc_precision_at_100_max
value: 66.1165
- type: nauc_precision_at_100_std
value: 55.2477
- type: nauc_precision_at_100_diff1
value: 5.7612
- type: nauc_precision_at_1000_max
value: 100.0
- type: nauc_precision_at_1000_std
value: 86.9281
- type: nauc_precision_at_1000_diff1
value: 72.2222
- type: nauc_mrr_at_1_max
value: 13.238199999999999
- type: nauc_mrr_at_1_std
value: -21.1942
- type: nauc_mrr_at_1_diff1
value: 47.1481
- type: nauc_mrr_at_3_max
value: 13.370999999999999
- type: nauc_mrr_at_3_std
value: -18.0171
- type: nauc_mrr_at_3_diff1
value: 31.3232
- type: nauc_mrr_at_5_max
value: 12.646099999999999
- type: nauc_mrr_at_5_std
value: -18.5601
- type: nauc_mrr_at_5_diff1
value: 28.8561
- type: nauc_mrr_at_10_max
value: 13.1101
- type: nauc_mrr_at_10_std
value: -18.915000000000003
- type: nauc_mrr_at_10_diff1
value: 28.9512
- type: nauc_mrr_at_20_max
value: 13.0191
- type: nauc_mrr_at_20_std
value: -18.501
- type: nauc_mrr_at_20_diff1
value: 29.102299999999996
- type: nauc_mrr_at_100_max
value: 13.475699999999998
- type: nauc_mrr_at_100_std
value: -17.9907
- type: nauc_mrr_at_100_diff1
value: 29.549999999999997
- type: nauc_mrr_at_1000_max
value: 13.3963
- type: nauc_mrr_at_1000_std
value: -18.093999999999998
- type: nauc_mrr_at_1000_diff1
value: 29.583
- type: main_score
value: 35.449999999999996
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: ndcg_at_1
value: 51.37500000000001
- type: ndcg_at_3
value: 41.275
- type: ndcg_at_5
value: 38.297
- type: ndcg_at_10
value: 35.96
- type: ndcg_at_20
value: 35.117
- type: ndcg_at_100
value: 39.878
- type: ndcg_at_1000
value: 47.931000000000004
- type: map_at_1
value: 8.651
- type: map_at_3
value: 13.51
- type: map_at_5
value: 15.468000000000002
- type: map_at_10
value: 17.628
- type: map_at_20
value: 19.786
- type: map_at_100
value: 23.354
- type: map_at_1000
value: 24.826
- type: recall_at_1
value: 8.651
- type: recall_at_3
value: 14.847
- type: recall_at_5
value: 18.04
- type: recall_at_10
value: 22.416
- type: recall_at_20
value: 28.136
- type: recall_at_100
value: 46.381
- type: recall_at_1000
value: 71.557
- type: precision_at_1
value: 64.5
- type: precision_at_3
value: 44.417
- type: precision_at_5
value: 36.6
- type: precision_at_10
value: 27.450000000000003
- type: precision_at_20
value: 19.811999999999998
- type: precision_at_100
value: 8.405
- type: precision_at_1000
value: 1.923
- type: mrr_at_1
value: 64.5
- type: mrr_at_3
value: 70.25
- type: mrr_at_5
value: 71.275
- type: mrr_at_10
value: 71.9889
- type: mrr_at_20
value: 72.207
- type: mrr_at_100
value: 72.33239999999999
- type: mrr_at_1000
value: 72.3461
- type: nauc_ndcg_at_1_max
value: 31.932100000000002
- type: nauc_ndcg_at_1_std
value: 10.2841
- type: nauc_ndcg_at_1_diff1
value: 36.07
- type: nauc_ndcg_at_3_max
value: 29.2531
- type: nauc_ndcg_at_3_std
value: 11.178799999999999
- type: nauc_ndcg_at_3_diff1
value: 25.764799999999997
- type: nauc_ndcg_at_5_max
value: 27.1826
- type: nauc_ndcg_at_5_std
value: 12.5
- type: nauc_ndcg_at_5_diff1
value: 24.9511
- type: nauc_ndcg_at_10_max
value: 24.1388
- type: nauc_ndcg_at_10_std
value: 11.350200000000001
- type: nauc_ndcg_at_10_diff1
value: 23.7319
- type: nauc_ndcg_at_20_max
value: 19.1396
- type: nauc_ndcg_at_20_std
value: 9.464699999999999
- type: nauc_ndcg_at_20_diff1
value: 20.9192
- type: nauc_ndcg_at_100_max
value: 20.1158
- type: nauc_ndcg_at_100_std
value: 13.2815
- type: nauc_ndcg_at_100_diff1
value: 21.221400000000003
- type: nauc_ndcg_at_1000_max
value: 26.648899999999998
- type: nauc_ndcg_at_1000_std
value: 22.5347
- type: nauc_ndcg_at_1000_diff1
value: 19.6168
- type: nauc_map_at_1_max
value: -4.3177
- type: nauc_map_at_1_std
value: -24.5562
- type: nauc_map_at_1_diff1
value: 29.4423
- type: nauc_map_at_3_max
value: -3.3966000000000003
- type: nauc_map_at_3_std
value: -21.9222
- type: nauc_map_at_3_diff1
value: 21.2481
- type: nauc_map_at_5_max
value: -1.1166
- type: nauc_map_at_5_std
value: -17.1077
- type: nauc_map_at_5_diff1
value: 19.9608
- type: nauc_map_at_10_max
value: 2.8669000000000002
- type: nauc_map_at_10_std
value: -11.6119
- type: nauc_map_at_10_diff1
value: 19.6247
- type: nauc_map_at_20_max
value: 6.4855
- type: nauc_map_at_20_std
value: -4.1277
- type: nauc_map_at_20_diff1
value: 18.1824
- type: nauc_map_at_100_max
value: 12.971499999999999
- type: nauc_map_at_100_std
value: 7.603400000000001
- type: nauc_map_at_100_diff1
value: 17.5644
- type: nauc_map_at_1000_max
value: 15.277299999999999
- type: nauc_map_at_1000_std
value: 10.5578
- type: nauc_map_at_1000_diff1
value: 17.1155
- type: nauc_recall_at_1_max
value: -4.3177
- type: nauc_recall_at_1_std
value: -24.5562
- type: nauc_recall_at_1_diff1
value: 29.4423
- type: nauc_recall_at_3_max
value: -6.2376000000000005
- type: nauc_recall_at_3_std
value: -23.4233
- type: nauc_recall_at_3_diff1
value: 17.329800000000002
- type: nauc_recall_at_5_max
value: -3.4825000000000004
- type: nauc_recall_at_5_std
value: -17.4895
- type: nauc_recall_at_5_diff1
value: 16.2379
- type: nauc_recall_at_10_max
value: 0.9988
- type: nauc_recall_at_10_std
value: -11.1992
- type: nauc_recall_at_10_diff1
value: 16.225
- type: nauc_recall_at_20_max
value: 4.693300000000001
- type: nauc_recall_at_20_std
value: -1.8259999999999998
- type: nauc_recall_at_20_diff1
value: 12.612400000000001
- type: nauc_recall_at_100_max
value: 13.420599999999999
- type: nauc_recall_at_100_std
value: 14.4476
- type: nauc_recall_at_100_diff1
value: 14.5736
- type: nauc_recall_at_1000_max
value: 18.4052
- type: nauc_recall_at_1000_std
value: 32.6262
- type: nauc_recall_at_1000_diff1
value: 6.2448
- type: nauc_precision_at_1_max
value: 44.2395
- type: nauc_precision_at_1_std
value: 16.9766
- type: nauc_precision_at_1_diff1
value: 42.981
- type: nauc_precision_at_3_max
value: 37.5078
- type: nauc_precision_at_3_std
value: 24.46
- type: nauc_precision_at_3_diff1
value: 16.700799999999997
- type: nauc_precision_at_5_max
value: 39.9766
- type: nauc_precision_at_5_std
value: 35.1485
- type: nauc_precision_at_5_diff1
value: 13.0716
- type: nauc_precision_at_10_max
value: 39.642500000000005
- type: nauc_precision_at_10_std
value: 41.8067
- type: nauc_precision_at_10_diff1
value: 8.864700000000001
- type: nauc_precision_at_20_max
value: 36.7342
- type: nauc_precision_at_20_std
value: 47.144200000000005
- type: nauc_precision_at_20_diff1
value: 3.6226000000000003
- type: nauc_precision_at_100_max
value: 35.3062
- type: nauc_precision_at_100_std
value: 47.2687
- type: nauc_precision_at_100_diff1
value: 0.0039
- type: nauc_precision_at_1000_max
value: 27.387099999999997
- type: nauc_precision_at_1000_std
value: 24.4162
- type: nauc_precision_at_1000_diff1
value: -13.5
- type: nauc_mrr_at_1_max
value: 44.2395
- type: nauc_mrr_at_1_std
value: 16.9766
- type: nauc_mrr_at_1_diff1
value: 42.981
- type: nauc_mrr_at_3_max
value: 45.9027
- type: nauc_mrr_at_3_std
value: 16.3998
- type: nauc_mrr_at_3_diff1
value: 42.7201
- type: nauc_mrr_at_5_max
value: 46.7905
- type: nauc_mrr_at_5_std
value: 17.921599999999998
- type: nauc_mrr_at_5_diff1
value: 42.4334
- type: nauc_mrr_at_10_max
value: 46.775
- type: nauc_mrr_at_10_std
value: 18.282899999999998
- type: nauc_mrr_at_10_diff1
value: 42.4501
- type: nauc_mrr_at_20_max
value: 46.671600000000005
- type: nauc_mrr_at_20_std
value: 18.064700000000002
- type: nauc_mrr_at_20_diff1
value: 42.4331
- type: nauc_mrr_at_100_max
value: 46.7118
- type: nauc_mrr_at_100_std
value: 18.2135
- type: nauc_mrr_at_100_diff1
value: 42.4809
- type: nauc_mrr_at_1000_max
value: 46.6966
- type: nauc_mrr_at_1000_std
value: 18.185200000000002
- type: nauc_mrr_at_1000_diff1
value: 42.4844
- type: main_score
value: 35.96
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 38.795
- type: f1
value: 35.2399
- type: f1_weighted
value: 40.7945
- type: main_score
value: 38.795
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: ndcg_at_1
value: 79.08800000000001
- type: ndcg_at_3
value: 83.943
- type: ndcg_at_5
value: 84.878
- type: ndcg_at_10
value: 85.528
- type: ndcg_at_20
value: 85.842
- type: ndcg_at_100
value: 86.134
- type: ndcg_at_1000
value: 86.367
- type: map_at_1
value: 73.211
- type: map_at_3
value: 80.5
- type: map_at_5
value: 81.134
- type: map_at_10
value: 81.463
- type: map_at_20
value: 81.566
- type: map_at_100
value: 81.622
- type: map_at_1000
value: 81.634
- type: recall_at_1
value: 73.211
- type: recall_at_3
value: 88.32799999999999
- type: recall_at_5
value: 90.821
- type: recall_at_10
value: 92.797
- type: recall_at_20
value: 93.932
- type: recall_at_100
value: 95.26299999999999
- type: recall_at_1000
value: 96.738
- type: precision_at_1
value: 79.08800000000001
- type: precision_at_3
value: 31.963
- type: precision_at_5
value: 19.769000000000002
- type: precision_at_10
value: 10.132
- type: precision_at_20
value: 5.149
- type: precision_at_100
value: 1.055
- type: precision_at_1000
value: 0.109
- type: mrr_at_1
value: 79.0879
- type: mrr_at_3
value: 86.1536
- type: mrr_at_5
value: 86.7004
- type: mrr_at_10
value: 86.9425
- type: mrr_at_20
value: 87.00099999999999
- type: mrr_at_100
value: 87.01719999999999
- type: mrr_at_1000
value: 87.01769999999999
- type: nauc_ndcg_at_1_max
value: 28.2184
- type: nauc_ndcg_at_1_std
value: -20.374200000000002
- type: nauc_ndcg_at_1_diff1
value: 64.4185
- type: nauc_ndcg_at_3_max
value: 22.014
- type: nauc_ndcg_at_3_std
value: -15.221699999999998
- type: nauc_ndcg_at_3_diff1
value: 47.511700000000005
- type: nauc_ndcg_at_5_max
value: 21.381700000000002
- type: nauc_ndcg_at_5_std
value: -14.3711
- type: nauc_ndcg_at_5_diff1
value: 46.6271
- type: nauc_ndcg_at_10_max
value: 20.4251
- type: nauc_ndcg_at_10_std
value: -13.3096
- type: nauc_ndcg_at_10_diff1
value: 46.1205
- type: nauc_ndcg_at_20_max
value: 20.686
- type: nauc_ndcg_at_20_std
value: -12.6058
- type: nauc_ndcg_at_20_diff1
value: 46.14
- type: nauc_ndcg_at_100_max
value: 20.657700000000002
- type: nauc_ndcg_at_100_std
value: -12.5531
- type: nauc_ndcg_at_100_diff1
value: 46.3788
- type: nauc_ndcg_at_1000_max
value: 21.0177
- type: nauc_ndcg_at_1000_std
value: -12.8318
- type: nauc_ndcg_at_1000_diff1
value: 46.8648
- type: nauc_map_at_1_max
value: 21.4975
- type: nauc_map_at_1_std
value: -14.5207
- type: nauc_map_at_1_diff1
value: 51.53959999999999
- type: nauc_map_at_3_max
value: 20.322699999999998
- type: nauc_map_at_3_std
value: -13.8986
- type: nauc_map_at_3_diff1
value: 46.3932
- type: nauc_map_at_5_max
value: 20.3296
- type: nauc_map_at_5_std
value: -13.5416
- type: nauc_map_at_5_diff1
value: 46.1518
- type: nauc_map_at_10_max
value: 20.0385
- type: nauc_map_at_10_std
value: -13.239999999999998
- type: nauc_map_at_10_diff1
value: 46.061800000000005
- type: nauc_map_at_20_max
value: 20.113300000000002
- type: nauc_map_at_20_std
value: -13.0931
- type: nauc_map_at_20_diff1
value: 46.091
- type: nauc_map_at_100_max
value: 20.1262
- type: nauc_map_at_100_std
value: -13.0646
- type: nauc_map_at_100_diff1
value: 46.1321
- type: nauc_map_at_1000_max
value: 20.1391
- type: nauc_map_at_1000_std
value: -13.069600000000001
- type: nauc_map_at_1000_diff1
value: 46.1501
- type: nauc_recall_at_1_max
value: 21.4975
- type: nauc_recall_at_1_std
value: -14.5207
- type: nauc_recall_at_1_diff1
value: 51.53959999999999
- type: nauc_recall_at_3_max
value: 15.379399999999999
- type: nauc_recall_at_3_std
value: -9.9735
- type: nauc_recall_at_3_diff1
value: 30.6769
- type: nauc_recall_at_5_max
value: 13.104099999999999
- type: nauc_recall_at_5_std
value: -6.2273000000000005
- type: nauc_recall_at_5_diff1
value: 24.4602
- type: nauc_recall_at_10_max
value: 6.4093
- type: nauc_recall_at_10_std
value: 0.9238
- type: nauc_recall_at_10_diff1
value: 16.2715
- type: nauc_recall_at_20_max
value: 5.5285
- type: nauc_recall_at_20_std
value: 9.1474
- type: nauc_recall_at_20_diff1
value: 10.8034
- type: nauc_recall_at_100_max
value: -0.116
- type: nauc_recall_at_100_std
value: 14.4612
- type: nauc_recall_at_100_diff1
value: 4.6372
- type: nauc_recall_at_1000_max
value: -1.595
- type: nauc_recall_at_1000_std
value: 18.1495
- type: nauc_recall_at_1000_diff1
value: -0.022000000000000002
- type: nauc_precision_at_1_max
value: 28.2184
- type: nauc_precision_at_1_std
value: -20.374200000000002
- type: nauc_precision_at_1_diff1
value: 64.4185
- type: nauc_precision_at_3_max
value: 24.238799999999998
- type: nauc_precision_at_3_std
value: -19.7064
- type: nauc_precision_at_3_diff1
value: 37.7498
- type: nauc_precision_at_5_max
value: 20.8308
- type: nauc_precision_at_5_std
value: -13.6486
- type: nauc_precision_at_5_diff1
value: 23.3404
- type: nauc_precision_at_10_max
value: 9.4386
- type: nauc_precision_at_10_std
value: -4.8239
- type: nauc_precision_at_10_diff1
value: 6.8594
- type: nauc_precision_at_20_max
value: 9.0063
- type: nauc_precision_at_20_std
value: 4.0311
- type: nauc_precision_at_20_diff1
value: -2.9298
- type: nauc_precision_at_100_max
value: 5.1057
- type: nauc_precision_at_100_std
value: 7.3903
- type: nauc_precision_at_100_diff1
value: -8.7148
- type: nauc_precision_at_1000_max
value: 6.3359
- type: nauc_precision_at_1000_std
value: 3.9797
- type: nauc_precision_at_1000_diff1
value: -8.3131
- type: nauc_mrr_at_1_max
value: 28.2184
- type: nauc_mrr_at_1_std
value: -20.374200000000002
- type: nauc_mrr_at_1_diff1
value: 64.4185
- type: nauc_mrr_at_3_max
value: 29.7481
- type: nauc_mrr_at_3_std
value: -21.9924
- type: nauc_mrr_at_3_diff1
value: 62.5737
- type: nauc_mrr_at_5_max
value: 29.8062
- type: nauc_mrr_at_5_std
value: -22.078
- type: nauc_mrr_at_5_diff1
value: 62.9
- type: nauc_mrr_at_10_max
value: 29.641000000000002
- type: nauc_mrr_at_10_std
value: -21.6827
- type: nauc_mrr_at_10_diff1
value: 62.944599999999994
- type: nauc_mrr_at_20_max
value: 29.6535
- type: nauc_mrr_at_20_std
value: -21.520400000000002
- type: nauc_mrr_at_20_diff1
value: 62.9583
- type: nauc_mrr_at_100_max
value: 29.622799999999998
- type: nauc_mrr_at_100_std
value: -21.5393
- type: nauc_mrr_at_100_diff1
value: 62.9658
- type: nauc_mrr_at_1000_max
value: 29.619400000000002
- type: nauc_mrr_at_1000_std
value: -21.5417
- type: nauc_mrr_at_1000_diff1
value: 62.96469999999999
- type: main_score
value: 85.528
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: ndcg_at_1
value: 35.494
- type: ndcg_at_3
value: 32.305
- type: ndcg_at_5
value: 34.332
- type: ndcg_at_10
value: 36.851
- type: ndcg_at_20
value: 39.31
- type: ndcg_at_100
value: 43.462
- type: ndcg_at_1000
value: 46.766000000000005
- type: map_at_1
value: 18.311
- type: map_at_3
value: 24.778
- type: map_at_5
value: 27.453
- type: map_at_10
value: 29.198
- type: map_at_20
value: 30.118000000000002
- type: map_at_100
value: 30.930000000000003
- type: map_at_1000
value: 31.115
- type: recall_at_1
value: 18.311
- type: recall_at_3
value: 28.823999999999998
- type: recall_at_5
value: 36.178
- type: recall_at_10
value: 43.842
- type: recall_at_20
value: 51.370000000000005
- type: recall_at_100
value: 68.593
- type: recall_at_1000
value: 88.55
- type: precision_at_1
value: 35.494
- type: precision_at_3
value: 21.142
- type: precision_at_5
value: 16.326999999999998
- type: precision_at_10
value: 10.309
- type: precision_at_20
value: 6.211
- type: precision_at_100
value: 1.7069999999999999
- type: precision_at_1000
value: 0.22899999999999998
- type: mrr_at_1
value: 35.4938
- type: mrr_at_3
value: 41.6667
- type: mrr_at_5
value: 43.4182
- type: mrr_at_10
value: 44.4732
- type: mrr_at_20
value: 44.969
- type: mrr_at_100
value: 45.318599999999996
- type: mrr_at_1000
value: 45.3674
- type: nauc_ndcg_at_1_max
value: 33.946799999999996
- type: nauc_ndcg_at_1_std
value: -5.282
- type: nauc_ndcg_at_1_diff1
value: 47.413
- type: nauc_ndcg_at_3_max
value: 30.9073
- type: nauc_ndcg_at_3_std
value: -2.2498
- type: nauc_ndcg_at_3_diff1
value: 38.548500000000004
- type: nauc_ndcg_at_5_max
value: 30.2537
- type: nauc_ndcg_at_5_std
value: -0.9919000000000001
- type: nauc_ndcg_at_5_diff1
value: 37.988499999999995
- type: nauc_ndcg_at_10_max
value: 30.5224
- type: nauc_ndcg_at_10_std
value: 0.0762
- type: nauc_ndcg_at_10_diff1
value: 38.2531
- type: nauc_ndcg_at_20_max
value: 32.173
- type: nauc_ndcg_at_20_std
value: 3.3266999999999998
- type: nauc_ndcg_at_20_diff1
value: 37.5071
- type: nauc_ndcg_at_100_max
value: 33.551700000000004
- type: nauc_ndcg_at_100_std
value: 5.8902
- type: nauc_ndcg_at_100_diff1
value: 37.3363
- type: nauc_ndcg_at_1000_max
value: 34.1671
- type: nauc_ndcg_at_1000_std
value: 5.4682
- type: nauc_ndcg_at_1000_diff1
value: 37.5779
- type: nauc_map_at_1_max
value: 20.0425
- type: nauc_map_at_1_std
value: -7.41
- type: nauc_map_at_1_diff1
value: 40.725699999999996
- type: nauc_map_at_3_max
value: 25.380799999999997
- type: nauc_map_at_3_std
value: -4.5524000000000004
- type: nauc_map_at_3_diff1
value: 38.960699999999996
- type: nauc_map_at_5_max
value: 27.208900000000003
- type: nauc_map_at_5_std
value: -3.034
- type: nauc_map_at_5_diff1
value: 38.475500000000004
- type: nauc_map_at_10_max
value: 28.6066
- type: nauc_map_at_10_std
value: -2.1042
- type: nauc_map_at_10_diff1
value: 38.4411
- type: nauc_map_at_20_max
value: 29.3931
- type: nauc_map_at_20_std
value: -0.8289
- type: nauc_map_at_20_diff1
value: 38.137
- type: nauc_map_at_100_max
value: 29.8041
- type: nauc_map_at_100_std
value: -0.1992
- type: nauc_map_at_100_diff1
value: 38.0546
- type: nauc_map_at_1000_max
value: 29.886400000000002
- type: nauc_map_at_1000_std
value: -0.1638
- type: nauc_map_at_1000_diff1
value: 38.0646
- type: nauc_recall_at_1_max
value: 20.0425
- type: nauc_recall_at_1_std
value: -7.41
- type: nauc_recall_at_1_diff1
value: 40.725699999999996
- type: nauc_recall_at_3_max
value: 20.8038
- type: nauc_recall_at_3_std
value: -4.1075
- type: nauc_recall_at_3_diff1
value: 33.0009
- type: nauc_recall_at_5_max
value: 23.1816
- type: nauc_recall_at_5_std
value: 0.2681
- type: nauc_recall_at_5_diff1
value: 30.1663
- type: nauc_recall_at_10_max
value: 23.754
- type: nauc_recall_at_10_std
value: 2.4185000000000003
- type: nauc_recall_at_10_diff1
value: 28.475499999999997
- type: nauc_recall_at_20_max
value: 27.711599999999997
- type: nauc_recall_at_20_std
value: 12.509700000000002
- type: nauc_recall_at_20_diff1
value: 25.172299999999996
- type: nauc_recall_at_100_max
value: 29.3806
- type: nauc_recall_at_100_std
value: 25.1963
- type: nauc_recall_at_100_diff1
value: 21.849
- type: nauc_recall_at_1000_max
value: 34.1492
- type: nauc_recall_at_1000_std
value: 40.4872
- type: nauc_recall_at_1000_diff1
value: 17.0167
- type: nauc_precision_at_1_max
value: 33.946799999999996
- type: nauc_precision_at_1_std
value: -5.282
- type: nauc_precision_at_1_diff1
value: 47.413
- type: nauc_precision_at_3_max
value: 36.6837
- type: nauc_precision_at_3_std
value: 3.7282
- type: nauc_precision_at_3_diff1
value: 31.0152
- type: nauc_precision_at_5_max
value: 37.6087
- type: nauc_precision_at_5_std
value: 7.3439000000000005
- type: nauc_precision_at_5_diff1
value: 27.2321
- type: nauc_precision_at_10_max
value: 38.2792
- type: nauc_precision_at_10_std
value: 11.3814
- type: nauc_precision_at_10_diff1
value: 22.6494
- type: nauc_precision_at_20_max
value: 38.455
- type: nauc_precision_at_20_std
value: 17.4053
- type: nauc_precision_at_20_diff1
value: 16.8265
- type: nauc_precision_at_100_max
value: 36.203
- type: nauc_precision_at_100_std
value: 22.2758
- type: nauc_precision_at_100_diff1
value: 8.3908
- type: nauc_precision_at_1000_max
value: 29.599700000000002
- type: nauc_precision_at_1000_std
value: 17.186899999999998
- type: nauc_precision_at_1000_diff1
value: 0.0332
- type: nauc_mrr_at_1_max
value: 33.946799999999996
- type: nauc_mrr_at_1_std
value: -5.282
- type: nauc_mrr_at_1_diff1
value: 47.413
- type: nauc_mrr_at_3_max
value: 34.0785
- type: nauc_mrr_at_3_std
value: -2.1323000000000003
- type: nauc_mrr_at_3_diff1
value: 43.8661
- type: nauc_mrr_at_5_max
value: 34.244
- type: nauc_mrr_at_5_std
value: -1.5425
- type: nauc_mrr_at_5_diff1
value: 43.7631
- type: nauc_mrr_at_10_max
value: 34.265299999999996
- type: nauc_mrr_at_10_std
value: -1.1494
- type: nauc_mrr_at_10_diff1
value: 43.639
- type: nauc_mrr_at_20_max
value: 34.5648
- type: nauc_mrr_at_20_std
value: -0.6076
- type: nauc_mrr_at_20_diff1
value: 43.431
- type: nauc_mrr_at_100_max
value: 34.571400000000004
- type: nauc_mrr_at_100_std
value: -0.5074000000000001
- type: nauc_mrr_at_100_diff1
value: 43.4003
- type: nauc_mrr_at_1000_max
value: 34.5576
- type: nauc_mrr_at_1000_std
value: -0.534
- type: nauc_mrr_at_1000_diff1
value: 43.4086
- type: main_score
value: 36.851
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: ndcg_at_1
value: 73.531
- type: ndcg_at_3
value: 58.24700000000001
- type: ndcg_at_5
value: 60.905
- type: ndcg_at_10
value: 62.918
- type: ndcg_at_20
value: 64.297
- type: ndcg_at_100
value: 66.056
- type: ndcg_at_1000
value: 67.554
- type: map_at_1
value: 36.766
- type: map_at_3
value: 50.427
- type: map_at_5
value: 52.449999999999996
- type: map_at_10
value: 53.639
- type: map_at_20
value: 54.17999999999999
- type: map_at_100
value: 54.532000000000004
- type: map_at_1000
value: 54.608000000000004
- type: recall_at_1
value: 36.766
- type: recall_at_3
value: 54.835
- type: recall_at_5
value: 60.080999999999996
- type: recall_at_10
value: 65.098
- type: recall_at_20
value: 69.541
- type: recall_at_100
value: 77.306
- type: recall_at_1000
value: 87.252
- type: precision_at_1
value: 73.531
- type: precision_at_3
value: 36.556
- type: precision_at_5
value: 24.032
- type: precision_at_10
value: 13.020000000000001
- type: precision_at_20
value: 6.954000000000001
- type: precision_at_100
value: 1.546
- type: precision_at_1000
value: 0.17500000000000002
- type: mrr_at_1
value: 73.5314
- type: mrr_at_3
value: 78.9489
- type: mrr_at_5
value: 79.7288
- type: mrr_at_10
value: 80.1036
- type: mrr_at_20
value: 80.2602
- type: mrr_at_100
value: 80.3412
- type: mrr_at_1000
value: 80.3512
- type: nauc_ndcg_at_1_max
value: 49.4087
- type: nauc_ndcg_at_1_std
value: -8.233
- type: nauc_ndcg_at_1_diff1
value: 69.19380000000001
- type: nauc_ndcg_at_3_max
value: 29.407899999999998
- type: nauc_ndcg_at_3_std
value: -2.1144
- type: nauc_ndcg_at_3_diff1
value: 27.245599999999996
- type: nauc_ndcg_at_5_max
value: 27.483
- type: nauc_ndcg_at_5_std
value: -0.7036
- type: nauc_ndcg_at_5_diff1
value: 24.2534
- type: nauc_ndcg_at_10_max
value: 26.766499999999997
- type: nauc_ndcg_at_10_std
value: 0.5583
- type: nauc_ndcg_at_10_diff1
value: 22.822300000000002
- type: nauc_ndcg_at_20_max
value: 26.339800000000004
- type: nauc_ndcg_at_20_std
value: 1.3486
- type: nauc_ndcg_at_20_diff1
value: 22.3499
- type: nauc_ndcg_at_100_max
value: 26.436799999999998
- type: nauc_ndcg_at_100_std
value: 2.5304
- type: nauc_ndcg_at_100_diff1
value: 22.372700000000002
- type: nauc_ndcg_at_1000_max
value: 26.9472
- type: nauc_ndcg_at_1000_std
value: 2.3277
- type: nauc_ndcg_at_1000_diff1
value: 23.3345
- type: nauc_map_at_1_max
value: 49.4087
- type: nauc_map_at_1_std
value: -8.233
- type: nauc_map_at_1_diff1
value: 69.19380000000001
- type: nauc_map_at_3_max
value: 25.2676
- type: nauc_map_at_3_std
value: -1.8659999999999999
- type: nauc_map_at_3_diff1
value: 21.0961
- type: nauc_map_at_5_max
value: 24.0651
- type: nauc_map_at_5_std
value: -0.8111
- type: nauc_map_at_5_diff1
value: 19.237099999999998
- type: nauc_map_at_10_max
value: 23.785
- type: nauc_map_at_10_std
value: -0.1037
- type: nauc_map_at_10_diff1
value: 18.5973
- type: nauc_map_at_20_max
value: 23.6813
- type: nauc_map_at_20_std
value: 0.1708
- type: nauc_map_at_20_diff1
value: 18.499299999999998
- type: nauc_map_at_100_max
value: 23.7276
- type: nauc_map_at_100_std
value: 0.3879
- type: nauc_map_at_100_diff1
value: 18.5423
- type: nauc_map_at_1000_max
value: 23.7501
- type: nauc_map_at_1000_std
value: 0.3886
- type: nauc_map_at_1000_diff1
value: 18.578500000000002
- type: nauc_recall_at_1_max
value: 49.4087
- type: nauc_recall_at_1_std
value: -8.233
- type: nauc_recall_at_1_diff1
value: 69.19380000000001
- type: nauc_recall_at_3_max
value: 21.7043
- type: nauc_recall_at_3_std
value: 0.24320000000000003
- type: nauc_recall_at_3_diff1
value: 12.102599999999999
- type: nauc_recall_at_5_max
value: 16.923
- type: nauc_recall_at_5_std
value: 2.9763
- type: nauc_recall_at_5_diff1
value: 5.5262
- type: nauc_recall_at_10_max
value: 13.8286
- type: nauc_recall_at_10_std
value: 6.1254
- type: nauc_recall_at_10_diff1
value: 0.6326
- type: nauc_recall_at_20_max
value: 11.307300000000001
- type: nauc_recall_at_20_std
value: 8.9861
- type: nauc_recall_at_20_diff1
value: -2.5909
- type: nauc_recall_at_100_max
value: 8.2009
- type: nauc_recall_at_100_std
value: 16.051199999999998
- type: nauc_recall_at_100_diff1
value: -7.757699999999999
- type: nauc_recall_at_1000_max
value: 5.4062
- type: nauc_recall_at_1000_std
value: 20.6122
- type: nauc_recall_at_1000_diff1
value: -11.931700000000001
- type: nauc_precision_at_1_max
value: 49.4087
- type: nauc_precision_at_1_std
value: -8.233
- type: nauc_precision_at_1_diff1
value: 69.19380000000001
- type: nauc_precision_at_3_max
value: 21.7043
- type: nauc_precision_at_3_std
value: 0.24320000000000003
- type: nauc_precision_at_3_diff1
value: 12.102599999999999
- type: nauc_precision_at_5_max
value: 16.923
- type: nauc_precision_at_5_std
value: 2.9763
- type: nauc_precision_at_5_diff1
value: 5.5262
- type: nauc_precision_at_10_max
value: 13.8286
- type: nauc_precision_at_10_std
value: 6.1254
- type: nauc_precision_at_10_diff1
value: 0.6326
- type: nauc_precision_at_20_max
value: 11.307300000000001
- type: nauc_precision_at_20_std
value: 8.9861
- type: nauc_precision_at_20_diff1
value: -2.5909
- type: nauc_precision_at_100_max
value: 8.2009
- type: nauc_precision_at_100_std
value: 16.051199999999998
- type: nauc_precision_at_100_diff1
value: -7.757699999999999
- type: nauc_precision_at_1000_max
value: 5.4062
- type: nauc_precision_at_1000_std
value: 20.6122
- type: nauc_precision_at_1000_diff1
value: -11.931700000000001
- type: nauc_mrr_at_1_max
value: 49.4087
- type: nauc_mrr_at_1_std
value: -8.233
- type: nauc_mrr_at_1_diff1
value: 69.19380000000001
- type: nauc_mrr_at_3_max
value: 51.004099999999994
- type: nauc_mrr_at_3_std
value: -6.4677
- type: nauc_mrr_at_3_diff1
value: 66.1969
- type: nauc_mrr_at_5_max
value: 50.880199999999995
- type: nauc_mrr_at_5_std
value: -6.3541
- type: nauc_mrr_at_5_diff1
value: 66.0764
- type: nauc_mrr_at_10_max
value: 50.924899999999994
- type: nauc_mrr_at_10_std
value: -6.2945
- type: nauc_mrr_at_10_diff1
value: 66.2079
- type: nauc_mrr_at_20_max
value: 50.907199999999996
- type: nauc_mrr_at_20_std
value: -6.253
- type: nauc_mrr_at_20_diff1
value: 66.28450000000001
- type: nauc_mrr_at_100_max
value: 50.8991
- type: nauc_mrr_at_100_std
value: -6.2459
- type: nauc_mrr_at_100_diff1
value: 66.3257
- type: nauc_mrr_at_1000_max
value: 50.8934
- type: nauc_mrr_at_1000_std
value: -6.2602
- type: nauc_mrr_at_1000_diff1
value: 66.328
- type: main_score
value: 62.918
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 62.2348
- type: f1
value: 62.0977
- type: f1_weighted
value: 62.0977
- type: ap
value: 57.750800000000005
- type: ap_weighted
value: 57.750800000000005
- type: main_score
value: 62.2348
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: ndcg_at_1
value: 15.085999999999999
- type: ndcg_at_3
value: 23.567
- type: ndcg_at_5
value: 27.066000000000003
- type: ndcg_at_10
value: 30.711
- type: ndcg_at_20
value: 33.251999999999995
- type: ndcg_at_100
value: 37.221
- type: ndcg_at_1000
value: 39.133
- type: map_at_1
value: 14.654
- type: map_at_3
value: 21.234
- type: map_at_5
value: 23.189999999999998
- type: map_at_10
value: 24.72
- type: map_at_20
value: 25.433
- type: map_at_100
value: 25.994
- type: map_at_1000
value: 26.067
- type: recall_at_1
value: 14.654
- type: recall_at_3
value: 29.862
- type: recall_at_5
value: 38.274
- type: recall_at_10
value: 49.341
- type: recall_at_20
value: 59.206
- type: recall_at_100
value: 80.22399999999999
- type: recall_at_1000
value: 95.037
- type: precision_at_1
value: 15.085999999999999
- type: precision_at_3
value: 10.277
- type: precision_at_5
value: 7.922999999999999
- type: precision_at_10
value: 5.132
- type: precision_at_20
value: 3.0949999999999998
- type: precision_at_100
value: 0.845
- type: precision_at_1000
value: 0.101
- type: mrr_at_1
value: 15.085999999999999
- type: mrr_at_3
value: 21.7311
- type: mrr_at_5
value: 23.6738
- type: mrr_at_10
value: 25.184099999999997
- type: mrr_at_20
value: 25.878899999999998
- type: mrr_at_100
value: 26.4216
- type: mrr_at_1000
value: 26.4886
- type: nauc_ndcg_at_1_max
value: 3.3686000000000003
- type: nauc_ndcg_at_1_std
value: -14.960799999999999
- type: nauc_ndcg_at_1_diff1
value: 30.0257
- type: nauc_ndcg_at_3_max
value: 4.3222
- type: nauc_ndcg_at_3_std
value: -15.8473
- type: nauc_ndcg_at_3_diff1
value: 26.935399999999998
- type: nauc_ndcg_at_5_max
value: 4.8392
- type: nauc_ndcg_at_5_std
value: -15.7197
- type: nauc_ndcg_at_5_diff1
value: 26.1067
- type: nauc_ndcg_at_10_max
value: 4.8289
- type: nauc_ndcg_at_10_std
value: -14.713300000000002
- type: nauc_ndcg_at_10_diff1
value: 25.3576
- type: nauc_ndcg_at_20_max
value: 5.2264
- type: nauc_ndcg_at_20_std
value: -13.5723
- type: nauc_ndcg_at_20_diff1
value: 25.7189
- type: nauc_ndcg_at_100_max
value: 6.2197000000000005
- type: nauc_ndcg_at_100_std
value: -10.5613
- type: nauc_ndcg_at_100_diff1
value: 25.407200000000003
- type: nauc_ndcg_at_1000_max
value: 6.336899999999999
- type: nauc_ndcg_at_1000_std
value: -11.2538
- type: nauc_ndcg_at_1000_diff1
value: 25.8353
- type: nauc_map_at_1_max
value: 3.4762
- type: nauc_map_at_1_std
value: -14.829899999999999
- type: nauc_map_at_1_diff1
value: 30.220200000000002
- type: nauc_map_at_3_max
value: 4.1498
- type: nauc_map_at_3_std
value: -15.659699999999999
- type: nauc_map_at_3_diff1
value: 27.6738
- type: nauc_map_at_5_max
value: 4.457599999999999
- type: nauc_map_at_5_std
value: -15.593599999999999
- type: nauc_map_at_5_diff1
value: 27.147399999999998
- type: nauc_map_at_10_max
value: 4.4191
- type: nauc_map_at_10_std
value: -15.199599999999998
- type: nauc_map_at_10_diff1
value: 26.8024
- type: nauc_map_at_20_max
value: 4.559699999999999
- type: nauc_map_at_20_std
value: -14.8687
- type: nauc_map_at_20_diff1
value: 26.929799999999997
- type: nauc_map_at_100_max
value: 4.709300000000001
- type: nauc_map_at_100_std
value: -14.430599999999998
- type: nauc_map_at_100_diff1
value: 26.895200000000003
- type: nauc_map_at_1000_max
value: 4.7146
- type: nauc_map_at_1000_std
value: -14.4381
- type: nauc_map_at_1000_diff1
value: 26.9071
- type: nauc_recall_at_1_max
value: 3.4762
- type: nauc_recall_at_1_std
value: -14.829899999999999
- type: nauc_recall_at_1_diff1
value: 30.220200000000002
- type: nauc_recall_at_3_max
value: 4.8518
- type: nauc_recall_at_3_std
value: -16.215
- type: nauc_recall_at_3_diff1
value: 25.1628
- type: nauc_recall_at_5_max
value: 5.8279
- type: nauc_recall_at_5_std
value: -15.9303
- type: nauc_recall_at_5_diff1
value: 23.544999999999998
- type: nauc_recall_at_10_max
value: 5.7948
- type: nauc_recall_at_10_std
value: -13.1624
- type: nauc_recall_at_10_diff1
value: 21.5447
- type: nauc_recall_at_20_max
value: 7.0539000000000005
- type: nauc_recall_at_20_std
value: -8.9408
- type: nauc_recall_at_20_diff1
value: 22.4027
- type: nauc_recall_at_100_max
value: 15.1651
- type: nauc_recall_at_100_std
value: 16.419
- type: nauc_recall_at_100_diff1
value: 17.897299999999998
- type: nauc_recall_at_1000_max
value: 41.646300000000004
- type: nauc_recall_at_1000_std
value: 54.791000000000004
- type: nauc_recall_at_1000_diff1
value: 16.4922
- type: nauc_precision_at_1_max
value: 3.3686000000000003
- type: nauc_precision_at_1_std
value: -14.960799999999999
- type: nauc_precision_at_1_diff1
value: 30.0257
- type: nauc_precision_at_3_max
value: 4.8638
- type: nauc_precision_at_3_std
value: -16.3
- type: nauc_precision_at_3_diff1
value: 25.1213
- type: nauc_precision_at_5_max
value: 5.8399
- type: nauc_precision_at_5_std
value: -16.1007
- type: nauc_precision_at_5_diff1
value: 23.4288
- type: nauc_precision_at_10_max
value: 6.042
- type: nauc_precision_at_10_std
value: -13.0782
- type: nauc_precision_at_10_diff1
value: 20.8509
- type: nauc_precision_at_20_max
value: 7.9528
- type: nauc_precision_at_20_std
value: -8.2321
- type: nauc_precision_at_20_diff1
value: 21.0746
- type: nauc_precision_at_100_max
value: 16.026699999999998
- type: nauc_precision_at_100_std
value: 15.112200000000001
- type: nauc_precision_at_100_diff1
value: 13.2433
- type: nauc_precision_at_1000_max
value: 24.8965
- type: nauc_precision_at_1000_std
value: 24.741
- type: nauc_precision_at_1000_diff1
value: 2.8078
- type: nauc_mrr_at_1_max
value: 3.3686000000000003
- type: nauc_mrr_at_1_std
value: -14.960799999999999
- type: nauc_mrr_at_1_diff1
value: 30.0257
- type: nauc_mrr_at_3_max
value: 3.9521
- type: nauc_mrr_at_3_std
value: -15.6591
- type: nauc_mrr_at_3_diff1
value: 27.511799999999997
- type: nauc_mrr_at_5_max
value: 4.3118
- type: nauc_mrr_at_5_std
value: -15.5244
- type: nauc_mrr_at_5_diff1
value: 27.024199999999997
- type: nauc_mrr_at_10_max
value: 4.3529
- type: nauc_mrr_at_10_std
value: -15.065100000000001
- type: nauc_mrr_at_10_diff1
value: 26.7106
- type: nauc_mrr_at_20_max
value: 4.4593
- type: nauc_mrr_at_20_std
value: -14.7683
- type: nauc_mrr_at_20_diff1
value: 26.815099999999997
- type: nauc_mrr_at_100_max
value: 4.5908999999999995
- type: nauc_mrr_at_100_std
value: -14.361099999999999
- type: nauc_mrr_at_100_diff1
value: 26.7866
- type: nauc_mrr_at_1000_max
value: 4.5903
- type: nauc_mrr_at_1000_std
value: -14.3764
- type: nauc_mrr_at_1000_diff1
value: 26.801000000000002
- type: main_score
value: 30.711
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.4505
- type: f1
value: 89.00200000000001
- type: f1_weighted
value: 89.442
- type: main_score
value: 89.4505
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 56.846799999999995
- type: f1
value: 39.2152
- type: f1_weighted
value: 58.797999999999995
- type: main_score
value: 56.846799999999995
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 64.768
- type: f1
value: 61.9285
- type: f1_weighted
value: 63.67
- type: main_score
value: 64.768
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 71.3416
- type: f1
value: 69.9576
- type: f1_weighted
value: 71.19680000000001
- type: main_score
value: 71.3416
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.5684
- type: v_measure_std
value: 1.6362999999999999
- type: main_score
value: 32.5684
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.551299999999998
- type: v_measure_std
value: 1.7208999999999999
- type: main_score
value: 31.551299999999998
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: map
value: 30.883
- type: mrr
value: 31.923299999999998
- type: nAUC_map_max
value: -20.072000000000003
- type: nAUC_map_std
value: -4.8503
- type: nAUC_map_diff1
value: 14.178099999999999
- type: nAUC_mrr_max
value: -14.7901
- type: nAUC_mrr_std
value: -2.8666
- type: nAUC_mrr_diff1
value: 13.2767
- type: main_score
value: 30.883
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: ndcg_at_1
value: 41.486000000000004
- type: ndcg_at_3
value: 39.324
- type: ndcg_at_5
value: 36.949
- type: ndcg_at_10
value: 33.737
- type: ndcg_at_20
value: 31.320999999999998
- type: ndcg_at_100
value: 30.886000000000003
- type: ndcg_at_1000
value: 40.018
- type: map_at_1
value: 5.452
- type: map_at_3
value: 9.45
- type: map_at_5
value: 10.92
- type: map_at_10
value: 12.758
- type: map_at_20
value: 14.036999999999999
- type: map_at_100
value: 15.93
- type: map_at_1000
value: 17.422
- type: recall_at_1
value: 5.452
- type: recall_at_3
value: 10.732999999999999
- type: recall_at_5
value: 13.553
- type: recall_at_10
value: 17.119999999999997
- type: recall_at_20
value: 20.459
- type: recall_at_100
value: 30.719
- type: recall_at_1000
value: 62.766
- type: precision_at_1
value: 43.344
- type: precision_at_3
value: 37.152
- type: precision_at_5
value: 31.703
- type: precision_at_10
value: 24.799
- type: precision_at_20
value: 18.142
- type: precision_at_100
value: 7.8950000000000005
- type: precision_at_1000
value: 2.091
- type: mrr_at_1
value: 43.3437
- type: mrr_at_3
value: 51.135200000000005
- type: mrr_at_5
value: 52.15689999999999
- type: mrr_at_10
value: 52.9277
- type: mrr_at_20
value: 53.2931
- type: mrr_at_100
value: 53.467200000000005
- type: mrr_at_1000
value: 53.5122
- type: nauc_ndcg_at_1_max
value: 33.6844
- type: nauc_ndcg_at_1_std
value: 17.6117
- type: nauc_ndcg_at_1_diff1
value: 37.641999999999996
- type: nauc_ndcg_at_3_max
value: 36.6302
- type: nauc_ndcg_at_3_std
value: 25.738
- type: nauc_ndcg_at_3_diff1
value: 29.8566
- type: nauc_ndcg_at_5_max
value: 39.043099999999995
- type: nauc_ndcg_at_5_std
value: 28.904999999999998
- type: nauc_ndcg_at_5_diff1
value: 26.129400000000004
- type: nauc_ndcg_at_10_max
value: 38.935199999999995
- type: nauc_ndcg_at_10_std
value: 30.338700000000003
- type: nauc_ndcg_at_10_diff1
value: 23.594
- type: nauc_ndcg_at_20_max
value: 38.2138
- type: nauc_ndcg_at_20_std
value: 31.8994
- type: nauc_ndcg_at_20_diff1
value: 21.583
- type: nauc_ndcg_at_100_max
value: 39.869
- type: nauc_ndcg_at_100_std
value: 33.591300000000004
- type: nauc_ndcg_at_100_diff1
value: 23.0398
- type: nauc_ndcg_at_1000_max
value: 44.9572
- type: nauc_ndcg_at_1000_std
value: 38.222
- type: nauc_ndcg_at_1000_diff1
value: 23.7314
- type: nauc_map_at_1_max
value: 8.0309
- type: nauc_map_at_1_std
value: -12.6861
- type: nauc_map_at_1_diff1
value: 45.5924
- type: nauc_map_at_3_max
value: 11.8264
- type: nauc_map_at_3_std
value: -7.3325000000000005
- type: nauc_map_at_3_diff1
value: 35.5714
- type: nauc_map_at_5_max
value: 15.7483
- type: nauc_map_at_5_std
value: -2.9122
- type: nauc_map_at_5_diff1
value: 32.2211
- type: nauc_map_at_10_max
value: 19.9795
- type: nauc_map_at_10_std
value: 2.6611
- type: nauc_map_at_10_diff1
value: 29.047099999999997
- type: nauc_map_at_20_max
value: 23.1754
- type: nauc_map_at_20_std
value: 8.0668
- type: nauc_map_at_20_diff1
value: 27.7477
- type: nauc_map_at_100_max
value: 26.4818
- type: nauc_map_at_100_std
value: 15.723
- type: nauc_map_at_100_diff1
value: 26.5443
- type: nauc_map_at_1000_max
value: 27.929100000000002
- type: nauc_map_at_1000_std
value: 19.81
- type: nauc_map_at_1000_diff1
value: 25.0603
- type: nauc_recall_at_1_max
value: 8.0309
- type: nauc_recall_at_1_std
value: -12.6861
- type: nauc_recall_at_1_diff1
value: 45.5924
- type: nauc_recall_at_3_max
value: 10.9894
- type: nauc_recall_at_3_std
value: -7.4279
- type: nauc_recall_at_3_diff1
value: 29.917899999999996
- type: nauc_recall_at_5_max
value: 15.7163
- type: nauc_recall_at_5_std
value: -0.8366
- type: nauc_recall_at_5_diff1
value: 22.8634
- type: nauc_recall_at_10_max
value: 19.5902
- type: nauc_recall_at_10_std
value: 5.3492
- type: nauc_recall_at_10_diff1
value: 19.4157
- type: nauc_recall_at_20_max
value: 23.1894
- type: nauc_recall_at_20_std
value: 12.8919
- type: nauc_recall_at_20_diff1
value: 17.8387
- type: nauc_recall_at_100_max
value: 30.150399999999998
- type: nauc_recall_at_100_std
value: 27.5036
- type: nauc_recall_at_100_diff1
value: 15.4935
- type: nauc_recall_at_1000_max
value: 32.404500000000006
- type: nauc_recall_at_1000_std
value: 30.7325
- type: nauc_recall_at_1000_diff1
value: 13.9299
- type: nauc_precision_at_1_max
value: 34.747699999999995
- type: nauc_precision_at_1_std
value: 17.5475
- type: nauc_precision_at_1_diff1
value: 36.0582
- type: nauc_precision_at_3_max
value: 39.8251
- type: nauc_precision_at_3_std
value: 34.3835
- type: nauc_precision_at_3_diff1
value: 19.651699999999998
- type: nauc_precision_at_5_max
value: 42.796800000000005
- type: nauc_precision_at_5_std
value: 40.083999999999996
- type: nauc_precision_at_5_diff1
value: 12.4069
- type: nauc_precision_at_10_max
value: 41.562599999999996
- type: nauc_precision_at_10_std
value: 44.7888
- type: nauc_precision_at_10_diff1
value: 5.587000000000001
- type: nauc_precision_at_20_max
value: 37.000499999999995
- type: nauc_precision_at_20_std
value: 50.4486
- type: nauc_precision_at_20_diff1
value: -0.1011
- type: nauc_precision_at_100_max
value: 24.7635
- type: nauc_precision_at_100_std
value: 51.001200000000004
- type: nauc_precision_at_100_diff1
value: -7.7414
- type: nauc_precision_at_1000_max
value: 10.837900000000001
- type: nauc_precision_at_1000_std
value: 37.2421
- type: nauc_precision_at_1000_diff1
value: -14.086599999999999
- type: nauc_mrr_at_1_max
value: 34.747699999999995
- type: nauc_mrr_at_1_std
value: 17.5475
- type: nauc_mrr_at_1_diff1
value: 36.0582
- type: nauc_mrr_at_3_max
value: 40.8392
- type: nauc_mrr_at_3_std
value: 24.9403
- type: nauc_mrr_at_3_diff1
value: 33.9575
- type: nauc_mrr_at_5_max
value: 42.2108
- type: nauc_mrr_at_5_std
value: 26.374799999999997
- type: nauc_mrr_at_5_diff1
value: 33.8034
- type: nauc_mrr_at_10_max
value: 42.180800000000005
- type: nauc_mrr_at_10_std
value: 26.6843
- type: nauc_mrr_at_10_diff1
value: 33.151
- type: nauc_mrr_at_20_max
value: 42.4685
- type: nauc_mrr_at_20_std
value: 27.1065
- type: nauc_mrr_at_20_diff1
value: 33.0052
- type: nauc_mrr_at_100_max
value: 42.417
- type: nauc_mrr_at_100_std
value: 27.069300000000002
- type: nauc_mrr_at_100_diff1
value: 33.1211
- type: nauc_mrr_at_1000_max
value: 42.3902
- type: nauc_mrr_at_1000_std
value: 27.019
- type: nauc_mrr_at_1000_diff1
value: 33.1177
- type: main_score
value: 33.737
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: ndcg_at_1
value: 32.793
- type: ndcg_at_3
value: 42.782
- type: ndcg_at_5
value: 47.554
- type: ndcg_at_10
value: 51.63100000000001
- type: ndcg_at_20
value: 54.005
- type: ndcg_at_100
value: 56.287
- type: ndcg_at_1000
value: 56.949000000000005
- type: map_at_1
value: 29.022
- type: map_at_3
value: 39.045
- type: map_at_5
value: 41.86
- type: map_at_10
value: 43.730000000000004
- type: map_at_20
value: 44.478
- type: map_at_100
value: 44.849
- type: map_at_1000
value: 44.877
- type: recall_at_1
value: 29.022
- type: recall_at_3
value: 50.40599999999999
- type: recall_at_5
value: 61.45
- type: recall_at_10
value: 73.32499999999999
- type: recall_at_20
value: 82.06099999999999
- type: recall_at_100
value: 93.455
- type: recall_at_1000
value: 98.414
- type: precision_at_1
value: 32.793
- type: precision_at_3
value: 19.583000000000002
- type: precision_at_5
value: 14.484
- type: precision_at_10
value: 8.737
- type: precision_at_20
value: 4.928
- type: precision_at_100
value: 1.134
- type: precision_at_1000
value: 0.12
- type: mrr_at_1
value: 32.821600000000004
- type: mrr_at_3
value: 42.275
- type: mrr_at_5
value: 44.7895
- type: mrr_at_10
value: 46.2574
- type: mrr_at_20
value: 46.8249
- type: mrr_at_100
value: 47.0971
- type: mrr_at_1000
value: 47.1157
- type: nauc_ndcg_at_1_max
value: 23.167299999999997
- type: nauc_ndcg_at_1_std
value: -4.5794
- type: nauc_ndcg_at_1_diff1
value: 31.1021
- type: nauc_ndcg_at_3_max
value: 27.1071
- type: nauc_ndcg_at_3_std
value: -4.8229
- type: nauc_ndcg_at_3_diff1
value: 26.442
- type: nauc_ndcg_at_5_max
value: 29.579
- type: nauc_ndcg_at_5_std
value: -3.9125
- type: nauc_ndcg_at_5_diff1
value: 26.1946
- type: nauc_ndcg_at_10_max
value: 30.6847
- type: nauc_ndcg_at_10_std
value: -2.3781
- type: nauc_ndcg_at_10_diff1
value: 25.9597
- type: nauc_ndcg_at_20_max
value: 31.4414
- type: nauc_ndcg_at_20_std
value: -0.6708000000000001
- type: nauc_ndcg_at_20_diff1
value: 25.886300000000002
- type: nauc_ndcg_at_100_max
value: 30.5333
- type: nauc_ndcg_at_100_std
value: -0.605
- type: nauc_ndcg_at_100_diff1
value: 26.3173
- type: nauc_ndcg_at_1000_max
value: 29.6714
- type: nauc_ndcg_at_1000_std
value: -1.4797
- type: nauc_ndcg_at_1000_diff1
value: 26.4662
- type: nauc_map_at_1_max
value: 22.0826
- type: nauc_map_at_1_std
value: -7.1051
- type: nauc_map_at_1_diff1
value: 31.398
- type: nauc_map_at_3_max
value: 26.0631
- type: nauc_map_at_3_std
value: -5.564100000000001
- type: nauc_map_at_3_diff1
value: 27.4542
- type: nauc_map_at_5_max
value: 27.4859
- type: nauc_map_at_5_std
value: -5.1595
- type: nauc_map_at_5_diff1
value: 27.4557
- type: nauc_map_at_10_max
value: 27.9754
- type: nauc_map_at_10_std
value: -4.4186000000000005
- type: nauc_map_at_10_diff1
value: 27.3476
- type: nauc_map_at_20_max
value: 28.168
- type: nauc_map_at_20_std
value: -3.8931
- type: nauc_map_at_20_diff1
value: 27.333800000000004
- type: nauc_map_at_100_max
value: 28.020899999999997
- type: nauc_map_at_100_std
value: -3.8826
- type: nauc_map_at_100_diff1
value: 27.411099999999998
- type: nauc_map_at_1000_max
value: 27.9917
- type: nauc_map_at_1000_std
value: -3.9068
- type: nauc_map_at_1000_diff1
value: 27.4158
- type: nauc_recall_at_1_max
value: 22.0826
- type: nauc_recall_at_1_std
value: -7.1051
- type: nauc_recall_at_1_diff1
value: 31.398
- type: nauc_recall_at_3_max
value: 29.145500000000002
- type: nauc_recall_at_3_std
value: -4.3699
- type: nauc_recall_at_3_diff1
value: 22.868
- type: nauc_recall_at_5_max
value: 35.4075
- type: nauc_recall_at_5_std
value: -2.0428
- type: nauc_recall_at_5_diff1
value: 21.4863
- type: nauc_recall_at_10_max
value: 41.0673
- type: nauc_recall_at_10_std
value: 3.6994
- type: nauc_recall_at_10_diff1
value: 19.2556
- type: nauc_recall_at_20_max
value: 50.6702
- type: nauc_recall_at_20_std
value: 16.162399999999998
- type: nauc_recall_at_20_diff1
value: 16.9676
- type: nauc_recall_at_100_max
value: 64.5925
- type: nauc_recall_at_100_std
value: 42.2234
- type: nauc_recall_at_100_diff1
value: 12.741
- type: nauc_recall_at_1000_max
value: 66.29310000000001
- type: nauc_recall_at_1000_std
value: 61.5236
- type: nauc_recall_at_1000_diff1
value: -6.1148
- type: nauc_precision_at_1_max
value: 23.167299999999997
- type: nauc_precision_at_1_std
value: -4.5794
- type: nauc_precision_at_1_diff1
value: 31.1021
- type: nauc_precision_at_3_max
value: 28.3464
- type: nauc_precision_at_3_std
value: -0.0571
- type: nauc_precision_at_3_diff1
value: 18.987399999999997
- type: nauc_precision_at_5_max
value: 30.9637
- type: nauc_precision_at_5_std
value: 2.3625
- type: nauc_precision_at_5_diff1
value: 15.912299999999998
- type: nauc_precision_at_10_max
value: 28.3203
- type: nauc_precision_at_10_std
value: 8.2947
- type: nauc_precision_at_10_diff1
value: 10.066899999999999
- type: nauc_precision_at_20_max
value: 26.2198
- type: nauc_precision_at_20_std
value: 15.4182
- type: nauc_precision_at_20_diff1
value: 5.0011
- type: nauc_precision_at_100_max
value: 12.721599999999999
- type: nauc_precision_at_100_std
value: 18.2616
- type: nauc_precision_at_100_diff1
value: -1.5249000000000001
- type: nauc_precision_at_1000_max
value: 1.514
- type: nauc_precision_at_1000_std
value: 12.6332
- type: nauc_precision_at_1000_diff1
value: -4.8346
- type: nauc_mrr_at_1_max
value: 23.3079
- type: nauc_mrr_at_1_std
value: -4.6507
- type: nauc_mrr_at_1_diff1
value: 31.014999999999997
- type: nauc_mrr_at_3_max
value: 26.371299999999998
- type: nauc_mrr_at_3_std
value: -3.6183
- type: nauc_mrr_at_3_diff1
value: 27.5342
- type: nauc_mrr_at_5_max
value: 27.4604
- type: nauc_mrr_at_5_std
value: -2.9482
- type: nauc_mrr_at_5_diff1
value: 27.308100000000003
- type: nauc_mrr_at_10_max
value: 27.6781
- type: nauc_mrr_at_10_std
value: -2.5515
- type: nauc_mrr_at_10_diff1
value: 27.338
- type: nauc_mrr_at_20_max
value: 27.760099999999998
- type: nauc_mrr_at_20_std
value: -2.2787
- type: nauc_mrr_at_20_diff1
value: 27.372200000000003
- type: nauc_mrr_at_100_max
value: 27.6611
- type: nauc_mrr_at_100_std
value: -2.3218
- type: nauc_mrr_at_100_diff1
value: 27.444000000000003
- type: nauc_mrr_at_1000_max
value: 27.6393
- type: nauc_mrr_at_1000_std
value: -2.3404000000000003
- type: nauc_mrr_at_1000_diff1
value: 27.4444
- type: main_score
value: 51.63100000000001
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: ndcg_at_1
value: 79.36999999999999
- type: ndcg_at_3
value: 83.545
- type: ndcg_at_5
value: 85.32
- type: ndcg_at_10
value: 86.696
- type: ndcg_at_20
value: 87.46199999999999
- type: ndcg_at_100
value: 88.103
- type: ndcg_at_1000
value: 88.252
- type: map_at_1
value: 68.961
- type: map_at_3
value: 79.616
- type: map_at_5
value: 81.54
- type: map_at_10
value: 82.65400000000001
- type: map_at_20
value: 83.098
- type: map_at_100
value: 83.33
- type: map_at_1000
value: 83.34899999999999
- type: recall_at_1
value: 68.961
- type: recall_at_3
value: 85.501
- type: recall_at_5
value: 90.379
- type: recall_at_10
value: 94.407
- type: recall_at_20
value: 96.86399999999999
- type: recall_at_100
value: 99.226
- type: recall_at_1000
value: 99.958
- type: precision_at_1
value: 79.36999999999999
- type: precision_at_3
value: 36.35
- type: precision_at_5
value: 24.048
- type: precision_at_10
value: 13.145000000000001
- type: precision_at_20
value: 7.007
- type: precision_at_100
value: 1.517
- type: precision_at_1000
value: 0.156
- type: mrr_at_1
value: 79.3
- type: mrr_at_3
value: 84.82169999999999
- type: mrr_at_5
value: 85.6047
- type: mrr_at_10
value: 85.94500000000001
- type: mrr_at_20
value: 86.0381
- type: mrr_at_100
value: 86.0694
- type: mrr_at_1000
value: 86.0712
- type: nauc_ndcg_at_1_max
value: 37.962
- type: nauc_ndcg_at_1_std
value: -32.129999999999995
- type: nauc_ndcg_at_1_diff1
value: 76.2543
- type: nauc_ndcg_at_3_max
value: 36.5568
- type: nauc_ndcg_at_3_std
value: -36.9639
- type: nauc_ndcg_at_3_diff1
value: 74.33229999999999
- type: nauc_ndcg_at_5_max
value: 36.6236
- type: nauc_ndcg_at_5_std
value: -38.3823
- type: nauc_ndcg_at_5_diff1
value: 74.8725
- type: nauc_ndcg_at_10_max
value: 37.2726
- type: nauc_ndcg_at_10_std
value: -37.6889
- type: nauc_ndcg_at_10_diff1
value: 75.437
- type: nauc_ndcg_at_20_max
value: 37.3643
- type: nauc_ndcg_at_20_std
value: -36.4545
- type: nauc_ndcg_at_20_diff1
value: 75.3032
- type: nauc_ndcg_at_100_max
value: 37.701
- type: nauc_ndcg_at_100_std
value: -34.6794
- type: nauc_ndcg_at_100_diff1
value: 75.1545
- type: nauc_ndcg_at_1000_max
value: 37.7386
- type: nauc_ndcg_at_1000_std
value: -34.659099999999995
- type: nauc_ndcg_at_1000_diff1
value: 75.1303
- type: nauc_map_at_1_max
value: 28.3786
- type: nauc_map_at_1_std
value: -34.4402
- type: nauc_map_at_1_diff1
value: 78.58579999999999
- type: nauc_map_at_3_max
value: 34.1617
- type: nauc_map_at_3_std
value: -39.0191
- type: nauc_map_at_3_diff1
value: 75.551
- type: nauc_map_at_5_max
value: 35.2348
- type: nauc_map_at_5_std
value: -39.352399999999996
- type: nauc_map_at_5_diff1
value: 75.45530000000001
- type: nauc_map_at_10_max
value: 36.0009
- type: nauc_map_at_10_std
value: -38.389
- type: nauc_map_at_10_diff1
value: 75.523
- type: nauc_map_at_20_max
value: 36.167300000000004
- type: nauc_map_at_20_std
value: -37.5191
- type: nauc_map_at_20_diff1
value: 75.3798
- type: nauc_map_at_100_max
value: 36.2928
- type: nauc_map_at_100_std
value: -36.8001
- type: nauc_map_at_100_diff1
value: 75.2957
- type: nauc_map_at_1000_max
value: 36.3027
- type: nauc_map_at_1000_std
value: -36.7641
- type: nauc_map_at_1000_diff1
value: 75.29090000000001
- type: nauc_recall_at_1_max
value: 28.3786
- type: nauc_recall_at_1_std
value: -34.4402
- type: nauc_recall_at_1_diff1
value: 78.58579999999999
- type: nauc_recall_at_3_max
value: 32.1082
- type: nauc_recall_at_3_std
value: -43.2936
- type: nauc_recall_at_3_diff1
value: 71.4939
- type: nauc_recall_at_5_max
value: 32.590599999999995
- type: nauc_recall_at_5_std
value: -48.7416
- type: nauc_recall_at_5_diff1
value: 70.7945
- type: nauc_recall_at_10_max
value: 34.755
- type: nauc_recall_at_10_std
value: -49.398599999999995
- type: nauc_recall_at_10_diff1
value: 71.87219999999999
- type: nauc_recall_at_20_max
value: 33.879999999999995
- type: nauc_recall_at_20_std
value: -45.1325
- type: nauc_recall_at_20_diff1
value: 71.3805
- type: nauc_recall_at_100_max
value: 37.4684
- type: nauc_recall_at_100_std
value: -13.0134
- type: nauc_recall_at_100_diff1
value: 69.963
- type: nauc_recall_at_1000_max
value: 31.6199
- type: nauc_recall_at_1000_std
value: 59.0228
- type: nauc_recall_at_1000_diff1
value: 60.9687
- type: nauc_precision_at_1_max
value: 37.962
- type: nauc_precision_at_1_std
value: -32.129999999999995
- type: nauc_precision_at_1_diff1
value: 76.2543
- type: nauc_precision_at_3_max
value: 11.419799999999999
- type: nauc_precision_at_3_std
value: 2.5604999999999998
- type: nauc_precision_at_3_diff1
value: -11.505799999999999
- type: nauc_precision_at_5_max
value: 4.454700000000001
- type: nauc_precision_at_5_std
value: 11.6986
- type: nauc_precision_at_5_diff1
value: -26.2868
- type: nauc_precision_at_10_max
value: -0.4261
- type: nauc_precision_at_10_std
value: 20.7877
- type: nauc_precision_at_10_diff1
value: -34.5624
- type: nauc_precision_at_20_max
value: -3.7817000000000003
- type: nauc_precision_at_20_std
value: 27.056599999999996
- type: nauc_precision_at_20_diff1
value: -39.0052
- type: nauc_precision_at_100_max
value: -6.4321
- type: nauc_precision_at_100_std
value: 33.1245
- type: nauc_precision_at_100_diff1
value: -41.9135
- type: nauc_precision_at_1000_max
value: -7.100199999999999
- type: nauc_precision_at_1000_std
value: 34.0081
- type: nauc_precision_at_1000_diff1
value: -42.556
- type: nauc_mrr_at_1_max
value: 37.754
- type: nauc_mrr_at_1_std
value: -32.2644
- type: nauc_mrr_at_1_diff1
value: 76.4182
- type: nauc_mrr_at_3_max
value: 38.7583
- type: nauc_mrr_at_3_std
value: -33.631699999999995
- type: nauc_mrr_at_3_diff1
value: 75.30369999999999
- type: nauc_mrr_at_5_max
value: 38.675399999999996
- type: nauc_mrr_at_5_std
value: -33.873
- type: nauc_mrr_at_5_diff1
value: 75.58890000000001
- type: nauc_mrr_at_10_max
value: 38.7962
- type: nauc_mrr_at_10_std
value: -33.5451
- type: nauc_mrr_at_10_diff1
value: 75.7153
- type: nauc_mrr_at_20_max
value: 38.7213
- type: nauc_mrr_at_20_std
value: -33.433600000000006
- type: nauc_mrr_at_20_diff1
value: 75.6934
- type: nauc_mrr_at_100_max
value: 38.6943
- type: nauc_mrr_at_100_std
value: -33.4013
- type: nauc_mrr_at_100_diff1
value: 75.6932
- type: nauc_mrr_at_1000_max
value: 38.6928
- type: nauc_mrr_at_1000_std
value: -33.4051
- type: nauc_mrr_at_1000_diff1
value: 75.69369999999999
- type: main_score
value: 86.696
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 50.019999999999996
- type: v_measure_std
value: 4.5914
- type: main_score
value: 50.019999999999996
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 53.9756
- type: v_measure_std
value: 11.6573
- type: main_score
value: 53.9756
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: ndcg_at_1
value: 24.6
- type: ndcg_at_3
value: 20.896
- type: ndcg_at_5
value: 18.497
- type: ndcg_at_10
value: 22.542
- type: ndcg_at_20
value: 25.812
- type: ndcg_at_100
value: 32.326
- type: ndcg_at_1000
value: 38.279999999999994
- type: map_at_1
value: 4.988
- type: map_at_3
value: 9.439
- type: map_at_5
value: 11.459999999999999
- type: map_at_10
value: 13.553
- type: map_at_20
value: 14.767
- type: map_at_100
value: 16.136
- type: map_at_1000
value: 16.512
- type: recall_at_1
value: 4.988
- type: recall_at_3
value: 12.046999999999999
- type: recall_at_5
value: 16.777
- type: recall_at_10
value: 24.212
- type: recall_at_20
value: 31.885
- type: recall_at_100
value: 53.105000000000004
- type: recall_at_1000
value: 82.02199999999999
- type: precision_at_1
value: 24.6
- type: precision_at_3
value: 19.8
- type: precision_at_5
value: 16.54
- type: precision_at_10
value: 11.940000000000001
- type: precision_at_20
value: 7.865
- type: precision_at_100
value: 2.616
- type: precision_at_1000
value: 0.404
- type: mrr_at_1
value: 24.6
- type: mrr_at_3
value: 33.1167
- type: mrr_at_5
value: 35.1717
- type: mrr_at_10
value: 36.7925
- type: mrr_at_20
value: 37.5284
- type: mrr_at_100
value: 37.9725
- type: mrr_at_1000
value: 38.0112
- type: nauc_ndcg_at_1_max
value: 17.8923
- type: nauc_ndcg_at_1_std
value: 9.1225
- type: nauc_ndcg_at_1_diff1
value: 22.665399999999998
- type: nauc_ndcg_at_3_max
value: 23.6866
- type: nauc_ndcg_at_3_std
value: 15.3093
- type: nauc_ndcg_at_3_diff1
value: 17.589299999999998
- type: nauc_ndcg_at_5_max
value: 25.3398
- type: nauc_ndcg_at_5_std
value: 18.002299999999998
- type: nauc_ndcg_at_5_diff1
value: 16.8155
- type: nauc_ndcg_at_10_max
value: 28.057399999999998
- type: nauc_ndcg_at_10_std
value: 22.7388
- type: nauc_ndcg_at_10_diff1
value: 16.0553
- type: nauc_ndcg_at_20_max
value: 28.9134
- type: nauc_ndcg_at_20_std
value: 25.389
- type: nauc_ndcg_at_20_diff1
value: 15.7728
- type: nauc_ndcg_at_100_max
value: 29.9553
- type: nauc_ndcg_at_100_std
value: 29.8607
- type: nauc_ndcg_at_100_diff1
value: 15.526100000000001
- type: nauc_ndcg_at_1000_max
value: 29.088399999999996
- type: nauc_ndcg_at_1000_std
value: 29.2896
- type: nauc_ndcg_at_1000_diff1
value: 15.2143
- type: nauc_map_at_1_max
value: 17.9628
- type: nauc_map_at_1_std
value: 8.9923
- type: nauc_map_at_1_diff1
value: 22.7227
- type: nauc_map_at_3_max
value: 24.012700000000002
- type: nauc_map_at_3_std
value: 15.1908
- type: nauc_map_at_3_diff1
value: 17.7637
- type: nauc_map_at_5_max
value: 25.0497
- type: nauc_map_at_5_std
value: 17.366300000000003
- type: nauc_map_at_5_diff1
value: 16.1512
- type: nauc_map_at_10_max
value: 26.777299999999997
- type: nauc_map_at_10_std
value: 21.0365
- type: nauc_map_at_10_diff1
value: 15.0999
- type: nauc_map_at_20_max
value: 27.6561
- type: nauc_map_at_20_std
value: 23.031399999999998
- type: nauc_map_at_20_diff1
value: 14.935300000000002
- type: nauc_map_at_100_max
value: 28.015800000000002
- type: nauc_map_at_100_std
value: 24.840899999999998
- type: nauc_map_at_100_diff1
value: 14.9355
- type: nauc_map_at_1000_max
value: 27.9646
- type: nauc_map_at_1000_std
value: 24.9601
- type: nauc_map_at_1000_diff1
value: 14.886
- type: nauc_recall_at_1_max
value: 17.9628
- type: nauc_recall_at_1_std
value: 8.9923
- type: nauc_recall_at_1_diff1
value: 22.7227
- type: nauc_recall_at_3_max
value: 25.008399999999998
- type: nauc_recall_at_3_std
value: 17.1697
- type: nauc_recall_at_3_diff1
value: 15.1082
- type: nauc_recall_at_5_max
value: 26.4345
- type: nauc_recall_at_5_std
value: 20.7923
- type: nauc_recall_at_5_diff1
value: 13.58
- type: nauc_recall_at_10_max
value: 29.5057
- type: nauc_recall_at_10_std
value: 27.8646
- type: nauc_recall_at_10_diff1
value: 11.8098
- type: nauc_recall_at_20_max
value: 29.3419
- type: nauc_recall_at_20_std
value: 31.6086
- type: nauc_recall_at_20_diff1
value: 10.6491
- type: nauc_recall_at_100_max
value: 28.8421
- type: nauc_recall_at_100_std
value: 40.2696
- type: nauc_recall_at_100_diff1
value: 8.1461
- type: nauc_recall_at_1000_max
value: 22.8234
- type: nauc_recall_at_1000_std
value: 41.6117
- type: nauc_recall_at_1000_diff1
value: 1.8689999999999998
- type: nauc_precision_at_1_max
value: 17.8923
- type: nauc_precision_at_1_std
value: 9.1225
- type: nauc_precision_at_1_diff1
value: 22.665399999999998
- type: nauc_precision_at_3_max
value: 25.1067
- type: nauc_precision_at_3_std
value: 17.4066
- type: nauc_precision_at_3_diff1
value: 15.0583
- type: nauc_precision_at_5_max
value: 26.6005
- type: nauc_precision_at_5_std
value: 20.9158
- type: nauc_precision_at_5_diff1
value: 13.591700000000001
- type: nauc_precision_at_10_max
value: 29.8091
- type: nauc_precision_at_10_std
value: 28.0069
- type: nauc_precision_at_10_diff1
value: 11.675699999999999
- type: nauc_precision_at_20_max
value: 29.5651
- type: nauc_precision_at_20_std
value: 31.439899999999998
- type: nauc_precision_at_20_diff1
value: 10.4784
- type: nauc_precision_at_100_max
value: 28.853299999999997
- type: nauc_precision_at_100_std
value: 39.3115
- type: nauc_precision_at_100_diff1
value: 7.6562
- type: nauc_precision_at_1000_max
value: 23.025599999999997
- type: nauc_precision_at_1000_std
value: 38.554300000000005
- type: nauc_precision_at_1000_diff1
value: 1.3502999999999998
- type: nauc_mrr_at_1_max
value: 17.8923
- type: nauc_mrr_at_1_std
value: 9.1225
- type: nauc_mrr_at_1_diff1
value: 22.665399999999998
- type: nauc_mrr_at_3_max
value: 21.2588
- type: nauc_mrr_at_3_std
value: 12.7528
- type: nauc_mrr_at_3_diff1
value: 19.808999999999997
- type: nauc_mrr_at_5_max
value: 22.572200000000002
- type: nauc_mrr_at_5_std
value: 14.210500000000001
- type: nauc_mrr_at_5_diff1
value: 20.502000000000002
- type: nauc_mrr_at_10_max
value: 23.372799999999998
- type: nauc_mrr_at_10_std
value: 15.1215
- type: nauc_mrr_at_10_diff1
value: 20.8449
- type: nauc_mrr_at_20_max
value: 23.017599999999998
- type: nauc_mrr_at_20_std
value: 15.0391
- type: nauc_mrr_at_20_diff1
value: 20.8233
- type: nauc_mrr_at_100_max
value: 22.8993
- type: nauc_mrr_at_100_std
value: 14.8474
- type: nauc_mrr_at_100_diff1
value: 20.8759
- type: nauc_mrr_at_1000_max
value: 22.8744
- type: nauc_mrr_at_1000_std
value: 14.8178
- type: nauc_mrr_at_1000_diff1
value: 20.8635
- type: main_score
value: 22.542
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: pearson
value: 77.4874
- type: spearman
value: 68.79809999999999
- type: cosine_pearson
value: 77.4874
- type: cosine_spearman
value: 68.79809999999999
- type: manhattan_pearson
value: 73.3583
- type: manhattan_spearman
value: 68.6911
- type: euclidean_pearson
value: 73.82039999999999
- type: euclidean_spearman
value: 68.79809999999999
- type: main_score
value: 68.79809999999999
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: pearson
value: 67.8391
- type: spearman
value: 64.77380000000001
- type: cosine_pearson
value: 67.8391
- type: cosine_spearman
value: 64.77380000000001
- type: manhattan_pearson
value: 64.7258
- type: manhattan_spearman
value: 64.1558
- type: euclidean_pearson
value: 65.68469999999999
- type: euclidean_spearman
value: 64.7722
- type: main_score
value: 64.77380000000001
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: pearson
value: 78.8177
- type: spearman
value: 79.3253
- type: cosine_pearson
value: 78.8177
- type: cosine_spearman
value: 79.3253
- type: manhattan_pearson
value: 78.6048
- type: manhattan_spearman
value: 79.1874
- type: euclidean_pearson
value: 78.71010000000001
- type: euclidean_spearman
value: 79.3253
- type: main_score
value: 79.3253
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: pearson
value: 75.6791
- type: spearman
value: 70.1701
- type: cosine_pearson
value: 75.6791
- type: cosine_spearman
value: 70.1701
- type: manhattan_pearson
value: 73.85239999999999
- type: manhattan_spearman
value: 69.9223
- type: euclidean_pearson
value: 74.143
- type: euclidean_spearman
value: 70.1701
- type: main_score
value: 70.1701
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: pearson
value: 80.4413
- type: spearman
value: 82.0343
- type: cosine_pearson
value: 80.4413
- type: cosine_spearman
value: 82.0343
- type: manhattan_pearson
value: 81.3627
- type: manhattan_spearman
value: 81.8838
- type: euclidean_pearson
value: 81.47569999999999
- type: euclidean_spearman
value: 82.0343
- type: main_score
value: 82.0343
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: pearson
value: 77.172
- type: spearman
value: 78.9633
- type: cosine_pearson
value: 77.172
- type: cosine_spearman
value: 78.9633
- type: manhattan_pearson
value: 78.35849999999999
- type: manhattan_spearman
value: 78.7975
- type: euclidean_pearson
value: 78.5236
- type: euclidean_spearman
value: 78.9633
- type: main_score
value: 78.9633
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 83.5117
- type: spearman
value: 84.64970000000001
- type: cosine_pearson
value: 83.5117
- type: cosine_spearman
value: 84.64970000000001
- type: manhattan_pearson
value: 84.5137
- type: manhattan_spearman
value: 84.7848
- type: euclidean_pearson
value: 84.531
- type: euclidean_spearman
value: 84.64970000000001
- type: main_score
value: 84.64970000000001
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 29.0052
- type: spearman
value: 30.640299999999996
- type: cosine_pearson
value: 29.0052
- type: cosine_spearman
value: 30.640299999999996
- type: manhattan_pearson
value: 25.988099999999996
- type: manhattan_spearman
value: 26.935399999999998
- type: euclidean_pearson
value: 28.5366
- type: euclidean_spearman
value: 30.640299999999996
- type: main_score
value: 30.640299999999996
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 42.0755
- type: spearman
value: 39.763999999999996
- type: cosine_pearson
value: 42.0755
- type: cosine_spearman
value: 39.763999999999996
- type: manhattan_pearson
value: 40.872
- type: manhattan_spearman
value: 38.4749
- type: euclidean_pearson
value: 42.051500000000004
- type: euclidean_spearman
value: 39.7565
- type: main_score
value: 39.763999999999996
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 44.2318
- type: spearman
value: 46.5518
- type: cosine_pearson
value: 44.2318
- type: cosine_spearman
value: 46.5518
- type: manhattan_pearson
value: 43.396699999999996
- type: manhattan_spearman
value: 46.1132
- type: euclidean_pearson
value: 43.993500000000004
- type: euclidean_spearman
value: 46.5518
- type: main_score
value: 46.5518
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 36.716100000000004
- type: spearman
value: 34.6968
- type: cosine_pearson
value: 36.716100000000004
- type: cosine_spearman
value: 34.6968
- type: manhattan_pearson
value: 35.1918
- type: manhattan_spearman
value: 33.3692
- type: euclidean_pearson
value: 36.3921
- type: euclidean_spearman
value: 34.6968
- type: main_score
value: 34.6968
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 21.2825
- type: spearman
value: 17.6922
- type: cosine_pearson
value: 21.2825
- type: cosine_spearman
value: 17.6922
- type: manhattan_pearson
value: 19.491
- type: manhattan_spearman
value: 15.989700000000001
- type: euclidean_pearson
value: 21.583
- type: euclidean_spearman
value: 17.6922
- type: main_score
value: 17.6922
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 32.1584
- type: spearman
value: 27.9254
- type: cosine_pearson
value: 32.1584
- type: cosine_spearman
value: 27.9254
- type: manhattan_pearson
value: 34.2047
- type: manhattan_spearman
value: 31.1955
- type: euclidean_pearson
value: 32.4369
- type: euclidean_spearman
value: 27.9254
- type: main_score
value: 27.9254
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: pearson
value: 21.0842
- type: spearman
value: 18.5115
- type: cosine_pearson
value: 21.0842
- type: cosine_spearman
value: 18.5115
- type: manhattan_pearson
value: 23.5904
- type: manhattan_spearman
value: 21.032400000000003
- type: euclidean_pearson
value: 21.2805
- type: euclidean_spearman
value: 18.5115
- type: main_score
value: 18.5115
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 66.9563
- type: spearman
value: 67.4747
- type: cosine_pearson
value: 66.9563
- type: cosine_spearman
value: 67.4747
- type: manhattan_pearson
value: 68.32629999999999
- type: manhattan_spearman
value: 66.8163
- type: euclidean_pearson
value: 68.731
- type: euclidean_spearman
value: 67.4747
- type: main_score
value: 67.4747
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 56.3095
- type: spearman
value: 54.1005
- type: cosine_pearson
value: 56.3095
- type: cosine_spearman
value: 54.1005
- type: manhattan_pearson
value: 59.4023
- type: manhattan_spearman
value: 52.6259
- type: euclidean_pearson
value: 58.6527
- type: euclidean_spearman
value: 54.1005
- type: main_score
value: 54.1005
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 62.0575
- type: spearman
value: 66.9527
- type: cosine_pearson
value: 62.0575
- type: cosine_spearman
value: 66.9527
- type: manhattan_pearson
value: 62.648700000000005
- type: manhattan_spearman
value: 65.6446
- type: euclidean_pearson
value: 63.546800000000005
- type: euclidean_spearman
value: 66.9527
- type: main_score
value: 66.9527
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 68.42439999999999
- type: spearman
value: 69.0444
- type: cosine_pearson
value: 68.42439999999999
- type: cosine_spearman
value: 69.0444
- type: manhattan_pearson
value: 65.1492
- type: manhattan_spearman
value: 65.2364
- type: euclidean_pearson
value: 68.4923
- type: euclidean_spearman
value: 69.0444
- type: main_score
value: 69.0444
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: pearson
value: 34.164699999999996
- type: spearman
value: 36.1776
- type: cosine_pearson
value: 34.164699999999996
- type: cosine_spearman
value: 36.1776
- type: manhattan_pearson
value: 33.0685
- type: manhattan_spearman
value: 34.4054
- type: euclidean_pearson
value: 34.1002
- type: euclidean_spearman
value: 36.1776
- type: main_score
value: 36.1776
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: pearson
value: 78.0802
- type: spearman
value: 78.0444
- type: cosine_pearson
value: 78.0802
- type: cosine_spearman
value: 78.0444
- type: manhattan_pearson
value: 78.0703
- type: manhattan_spearman
value: 77.681
- type: euclidean_pearson
value: 78.4998
- type: euclidean_spearman
value: 78.0444
- type: main_score
value: 78.0444
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.4489
- type: mrr
value: 96.0178
- type: nAUC_map_max
value: 49.2333
- type: nAUC_map_std
value: 63.6541
- type: nAUC_map_diff1
value: 0.40959999999999996
- type: nAUC_mrr_max
value: 83.6216
- type: nAUC_mrr_std
value: 76.7559
- type: nAUC_mrr_diff1
value: 42.9429
- type: main_score
value: 86.4489
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: ndcg_at_1
value: 59.333000000000006
- type: ndcg_at_3
value: 65.793
- type: ndcg_at_5
value: 69.429
- type: ndcg_at_10
value: 71.27
- type: ndcg_at_20
value: 72.929
- type: ndcg_at_100
value: 73.88900000000001
- type: ndcg_at_1000
value: 74.41
- type: map_at_1
value: 56.577999999999996
- type: map_at_3
value: 63.416
- type: map_at_5
value: 65.77
- type: map_at_10
value: 66.725
- type: map_at_20
value: 67.24799999999999
- type: map_at_100
value: 67.379
- type: map_at_1000
value: 67.4
- type: recall_at_1
value: 56.577999999999996
- type: recall_at_3
value: 70.072
- type: recall_at_5
value: 79.011
- type: recall_at_10
value: 84.2
- type: recall_at_20
value: 90.5
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 99.667
- type: precision_at_1
value: 59.333000000000006
- type: precision_at_3
value: 25.556
- type: precision_at_5
value: 17.666999999999998
- type: precision_at_10
value: 9.6
- type: precision_at_20
value: 5.167
- type: precision_at_100
value: 1.087
- type: precision_at_1000
value: 0.11299999999999999
- type: mrr_at_1
value: 59.3333
- type: mrr_at_3
value: 64.9444
- type: mrr_at_5
value: 66.9278
- type: mrr_at_10
value: 67.5327
- type: mrr_at_20
value: 67.9354
- type: mrr_at_100
value: 68.0616
- type: mrr_at_1000
value: 68.08239999999999
- type: nauc_ndcg_at_1_max
value: 62.536199999999994
- type: nauc_ndcg_at_1_std
value: 4.3275
- type: nauc_ndcg_at_1_diff1
value: 78.2294
- type: nauc_ndcg_at_3_max
value: 63.0626
- type: nauc_ndcg_at_3_std
value: 6.0584
- type: nauc_ndcg_at_3_diff1
value: 74.4931
- type: nauc_ndcg_at_5_max
value: 64.73989999999999
- type: nauc_ndcg_at_5_std
value: 5.6514
- type: nauc_ndcg_at_5_diff1
value: 73.5498
- type: nauc_ndcg_at_10_max
value: 65.43090000000001
- type: nauc_ndcg_at_10_std
value: 9.1274
- type: nauc_ndcg_at_10_diff1
value: 72.4814
- type: nauc_ndcg_at_20_max
value: 65.7156
- type: nauc_ndcg_at_20_std
value: 9.9385
- type: nauc_ndcg_at_20_diff1
value: 73.0996
- type: nauc_ndcg_at_100_max
value: 65.5687
- type: nauc_ndcg_at_100_std
value: 8.818299999999999
- type: nauc_ndcg_at_100_diff1
value: 73.6361
- type: nauc_ndcg_at_1000_max
value: 65.1956
- type: nauc_ndcg_at_1000_std
value: 8.4772
- type: nauc_ndcg_at_1000_diff1
value: 74.0393
- type: nauc_map_at_1_max
value: 58.2314
- type: nauc_map_at_1_std
value: -2.7946
- type: nauc_map_at_1_diff1
value: 78.24940000000001
- type: nauc_map_at_3_max
value: 61.364200000000004
- type: nauc_map_at_3_std
value: 2.7072
- type: nauc_map_at_3_diff1
value: 75.4798
- type: nauc_map_at_5_max
value: 63.1297
- type: nauc_map_at_5_std
value: 3.9505
- type: nauc_map_at_5_diff1
value: 74.9693
- type: nauc_map_at_10_max
value: 63.6643
- type: nauc_map_at_10_std
value: 5.8328999999999995
- type: nauc_map_at_10_diff1
value: 74.5464
- type: nauc_map_at_20_max
value: 63.8666
- type: nauc_map_at_20_std
value: 6.1967
- type: nauc_map_at_20_diff1
value: 74.7224
- type: nauc_map_at_100_max
value: 63.8254
- type: nauc_map_at_100_std
value: 6.0627
- type: nauc_map_at_100_diff1
value: 74.791
- type: nauc_map_at_1000_max
value: 63.811499999999995
- type: nauc_map_at_1000_std
value: 6.0484
- type: nauc_map_at_1000_diff1
value: 74.807
- type: nauc_recall_at_1_max
value: 58.2314
- type: nauc_recall_at_1_std
value: -2.7946
- type: nauc_recall_at_1_diff1
value: 78.24940000000001
- type: nauc_recall_at_3_max
value: 61.132299999999994
- type: nauc_recall_at_3_std
value: 6.1988
- type: nauc_recall_at_3_diff1
value: 70.7273
- type: nauc_recall_at_5_max
value: 66.542
- type: nauc_recall_at_5_std
value: 5.7653
- type: nauc_recall_at_5_diff1
value: 66.4586
- type: nauc_recall_at_10_max
value: 69.3605
- type: nauc_recall_at_10_std
value: 19.6237
- type: nauc_recall_at_10_diff1
value: 60.2814
- type: nauc_recall_at_20_max
value: 72.6154
- type: nauc_recall_at_20_std
value: 31.3504
- type: nauc_recall_at_20_diff1
value: 58.8899
- type: nauc_recall_at_100_max
value: 78.6002
- type: nauc_recall_at_100_std
value: 26.484999999999996
- type: nauc_recall_at_100_diff1
value: 56.4605
- type: nauc_recall_at_1000_max
value: 55.415499999999994
- type: nauc_recall_at_1000_std
value: 72.2222
- type: nauc_recall_at_1000_diff1
value: 35.8077
- type: nauc_precision_at_1_max
value: 62.536199999999994
- type: nauc_precision_at_1_std
value: 4.3275
- type: nauc_precision_at_1_diff1
value: 78.2294
- type: nauc_precision_at_3_max
value: 53.5524
- type: nauc_precision_at_3_std
value: 23.5724
- type: nauc_precision_at_3_diff1
value: 47.5389
- type: nauc_precision_at_5_max
value: 49.1594
- type: nauc_precision_at_5_std
value: 32.3563
- type: nauc_precision_at_5_diff1
value: 28.2105
- type: nauc_precision_at_10_max
value: 41.955799999999996
- type: nauc_precision_at_10_std
value: 44.039699999999996
- type: nauc_precision_at_10_diff1
value: 12.0187
- type: nauc_precision_at_20_max
value: 34.2442
- type: nauc_precision_at_20_std
value: 50.204899999999995
- type: nauc_precision_at_20_diff1
value: -0.1954
- type: nauc_precision_at_100_max
value: 26.8264
- type: nauc_precision_at_100_std
value: 51.4247
- type: nauc_precision_at_100_diff1
value: -11.9827
- type: nauc_precision_at_1000_max
value: 17.467
- type: nauc_precision_at_1000_std
value: 56.435100000000006
- type: nauc_precision_at_1000_diff1
value: -24.2103
- type: nauc_mrr_at_1_max
value: 62.536199999999994
- type: nauc_mrr_at_1_std
value: 4.3275
- type: nauc_mrr_at_1_diff1
value: 78.2294
- type: nauc_mrr_at_3_max
value: 64.5911
- type: nauc_mrr_at_3_std
value: 7.8005
- type: nauc_mrr_at_3_diff1
value: 75.82140000000001
- type: nauc_mrr_at_5_max
value: 65.1643
- type: nauc_mrr_at_5_std
value: 7.258100000000001
- type: nauc_mrr_at_5_diff1
value: 75.2062
- type: nauc_mrr_at_10_max
value: 65.3198
- type: nauc_mrr_at_10_std
value: 8.2173
- type: nauc_mrr_at_10_diff1
value: 74.9449
- type: nauc_mrr_at_20_max
value: 65.2169
- type: nauc_mrr_at_20_std
value: 8.115400000000001
- type: nauc_mrr_at_20_diff1
value: 75.1765
- type: nauc_mrr_at_100_max
value: 65.1744
- type: nauc_mrr_at_100_std
value: 7.994700000000001
- type: nauc_mrr_at_100_diff1
value: 75.2388
- type: nauc_mrr_at_1000_max
value: 65.1615
- type: nauc_mrr_at_1000_std
value: 7.9817
- type: nauc_mrr_at_1000_diff1
value: 75.2553
- type: main_score
value: 71.27
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: similarity_accuracy
value: 99.7604
- type: similarity_accuracy_threshold
value: 84.88210000000001
- type: similarity_f1
value: 87.86359999999999
- type: similarity_f1_threshold
value: 84.88210000000001
- type: similarity_precision
value: 88.1288
- type: similarity_recall
value: 87.6
- type: similarity_ap
value: 94.07140000000001
- type: cosine_accuracy
value: 99.7604
- type: cosine_accuracy_threshold
value: 84.88210000000001
- type: cosine_f1
value: 87.86359999999999
- type: cosine_f1_threshold
value: 84.88210000000001
- type: cosine_precision
value: 88.1288
- type: cosine_recall
value: 87.6
- type: cosine_ap
value: 94.07140000000001
- type: manhattan_accuracy
value: 99.7644
- type: manhattan_accuracy_threshold
value: 829.5789
- type: manhattan_f1
value: 87.92320000000001
- type: manhattan_f1_threshold
value: 840.6424
- type: manhattan_precision
value: 88.86619999999999
- type: manhattan_recall
value: 87.0
- type: manhattan_ap
value: 94.17
- type: euclidean_accuracy
value: 99.7604
- type: euclidean_accuracy_threshold
value: 54.986999999999995
- type: euclidean_f1
value: 87.86359999999999
- type: euclidean_f1_threshold
value: 54.986999999999995
- type: euclidean_precision
value: 88.1288
- type: euclidean_recall
value: 87.6
- type: euclidean_ap
value: 94.07140000000001
- type: dot_accuracy
value: 99.7604
- type: dot_accuracy_threshold
value: 84.88210000000001
- type: dot_f1
value: 87.86359999999999
- type: dot_f1_threshold
value: 84.88210000000001
- type: dot_precision
value: 88.1288
- type: dot_recall
value: 87.6
- type: dot_ap
value: 94.07140000000001
- type: max_accuracy
value: 99.7644
- type: max_f1
value: 87.92320000000001
- type: max_precision
value: 88.86619999999999
- type: max_recall
value: 87.6
- type: max_ap
value: 94.17
- type: main_score
value: 94.17
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 64.6589
- type: v_measure_std
value: 4.734
- type: main_score
value: 64.6589
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.9388
- type: v_measure_std
value: 1.6312
- type: main_score
value: 32.9388
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.645399999999995
- type: mrr
value: 53.5346
- type: nAUC_map_max
value: 12.8874
- type: nAUC_map_std
value: 9.2781
- type: nAUC_map_diff1
value: 39.864
- type: nAUC_mrr_max
value: 13.278
- type: nAUC_mrr_std
value: 9.501999999999999
- type: nAUC_mrr_diff1
value: 39.409499999999994
- type: main_score
value: 52.645399999999995
- task:
type: Retrieval
dataset:
name: MTEB StackOverflowQA (default)
type: CoIR-Retrieval/stackoverflow-qa
config: default
split: test
revision: db8f169f3894c14a00251061f957b2063eef2bd5
metrics:
- type: ndcg_at_1
value: 74.97500000000001
- type: ndcg_at_3
value: 81.247
- type: ndcg_at_5
value: 82.921
- type: ndcg_at_10
value: 83.92699999999999
- type: ndcg_at_20
value: 84.57000000000001
- type: ndcg_at_100
value: 85.095
- type: ndcg_at_1000
value: 85.33800000000001
- type: map_at_1
value: 74.97500000000001
- type: map_at_3
value: 79.781
- type: map_at_5
value: 80.711
- type: map_at_10
value: 81.126
- type: map_at_20
value: 81.308
- type: map_at_100
value: 81.389
- type: map_at_1000
value: 81.39699999999999
- type: recall_at_1
value: 74.97500000000001
- type: recall_at_3
value: 85.456
- type: recall_at_5
value: 89.519
- type: recall_at_10
value: 92.628
- type: recall_at_20
value: 95.135
- type: recall_at_100
value: 97.844
- type: recall_at_1000
value: 99.799
- type: precision_at_1
value: 74.97500000000001
- type: precision_at_3
value: 28.485
- type: precision_at_5
value: 17.904
- type: precision_at_10
value: 9.263
- type: precision_at_20
value: 4.757
- type: precision_at_100
value: 0.9780000000000001
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 74.9749
- type: mrr_at_3
value: 79.781
- type: mrr_at_5
value: 80.7113
- type: mrr_at_10
value: 81.12610000000001
- type: mrr_at_20
value: 81.30760000000001
- type: mrr_at_100
value: 81.38889999999999
- type: mrr_at_1000
value: 81.3974
- type: nauc_ndcg_at_1_max
value: 76.1721
- type: nauc_ndcg_at_1_std
value: -5.5159
- type: nauc_ndcg_at_1_diff1
value: 84.6697
- type: nauc_ndcg_at_3_max
value: 78.27629999999999
- type: nauc_ndcg_at_3_std
value: -1.2
- type: nauc_ndcg_at_3_diff1
value: 81.1214
- type: nauc_ndcg_at_5_max
value: 77.7687
- type: nauc_ndcg_at_5_std
value: -1.8698
- type: nauc_ndcg_at_5_diff1
value: 80.9252
- type: nauc_ndcg_at_10_max
value: 77.8029
- type: nauc_ndcg_at_10_std
value: -1.5579
- type: nauc_ndcg_at_10_diff1
value: 81.1043
- type: nauc_ndcg_at_20_max
value: 77.79310000000001
- type: nauc_ndcg_at_20_std
value: -1.7669000000000001
- type: nauc_ndcg_at_20_diff1
value: 81.4121
- type: nauc_ndcg_at_100_max
value: 77.7522
- type: nauc_ndcg_at_100_std
value: -1.4502
- type: nauc_ndcg_at_100_diff1
value: 81.684
- type: nauc_ndcg_at_1000_max
value: 77.6032
- type: nauc_ndcg_at_1000_std
value: -2.0256
- type: nauc_ndcg_at_1000_diff1
value: 81.7641
- type: nauc_map_at_1_max
value: 76.1721
- type: nauc_map_at_1_std
value: -5.5159
- type: nauc_map_at_1_diff1
value: 84.6697
- type: nauc_map_at_3_max
value: 77.6991
- type: nauc_map_at_3_std
value: -2.3189
- type: nauc_map_at_3_diff1
value: 82.0708
- type: nauc_map_at_5_max
value: 77.4286
- type: nauc_map_at_5_std
value: -2.721
- type: nauc_map_at_5_diff1
value: 82.0265
- type: nauc_map_at_10_max
value: 77.4212
- type: nauc_map_at_10_std
value: -2.633
- type: nauc_map_at_10_diff1
value: 82.109
- type: nauc_map_at_20_max
value: 77.4188
- type: nauc_map_at_20_std
value: -2.6752000000000002
- type: nauc_map_at_20_diff1
value: 82.19340000000001
- type: nauc_map_at_100_max
value: 77.4169
- type: nauc_map_at_100_std
value: -2.6487
- type: nauc_map_at_100_diff1
value: 82.2353
- type: nauc_map_at_1000_max
value: 77.413
- type: nauc_map_at_1000_std
value: -2.6639
- type: nauc_map_at_1000_diff1
value: 82.238
- type: nauc_recall_at_1_max
value: 76.1721
- type: nauc_recall_at_1_std
value: -5.5159
- type: nauc_recall_at_1_diff1
value: 84.6697
- type: nauc_recall_at_3_max
value: 80.4678
- type: nauc_recall_at_3_std
value: 3.0113000000000003
- type: nauc_recall_at_3_diff1
value: 77.5303
- type: nauc_recall_at_5_max
value: 79.2732
- type: nauc_recall_at_5_std
value: 2.0842
- type: nauc_recall_at_5_diff1
value: 75.5155
- type: nauc_recall_at_10_max
value: 80.2527
- type: nauc_recall_at_10_std
value: 5.7078
- type: nauc_recall_at_10_diff1
value: 74.4861
- type: nauc_recall_at_20_max
value: 81.29950000000001
- type: nauc_recall_at_20_std
value: 6.5553
- type: nauc_recall_at_20_diff1
value: 74.5628
- type: nauc_recall_at_100_max
value: 83.8742
- type: nauc_recall_at_100_std
value: 28.4213
- type: nauc_recall_at_100_diff1
value: 74.4027
- type: nauc_recall_at_1000_max
value: 60.9178
- type: nauc_recall_at_1000_std
value: -2.6599
- type: nauc_recall_at_1000_diff1
value: 47.6074
- type: nauc_precision_at_1_max
value: 76.1721
- type: nauc_precision_at_1_std
value: -5.5159
- type: nauc_precision_at_1_diff1
value: 84.6697
- type: nauc_precision_at_3_max
value: 80.4678
- type: nauc_precision_at_3_std
value: 3.0113000000000003
- type: nauc_precision_at_3_diff1
value: 77.5303
- type: nauc_precision_at_5_max
value: 79.2732
- type: nauc_precision_at_5_std
value: 2.0842
- type: nauc_precision_at_5_diff1
value: 75.5155
- type: nauc_precision_at_10_max
value: 80.2527
- type: nauc_precision_at_10_std
value: 5.7078
- type: nauc_precision_at_10_diff1
value: 74.4861
- type: nauc_precision_at_20_max
value: 81.29950000000001
- type: nauc_precision_at_20_std
value: 6.5553
- type: nauc_precision_at_20_diff1
value: 74.5628
- type: nauc_precision_at_100_max
value: 83.8742
- type: nauc_precision_at_100_std
value: 28.4213
- type: nauc_precision_at_100_diff1
value: 74.4027
- type: nauc_precision_at_1000_max
value: 60.9178
- type: nauc_precision_at_1000_std
value: -2.6599
- type: nauc_precision_at_1000_diff1
value: 47.6074
- type: nauc_mrr_at_1_max
value: 76.1721
- type: nauc_mrr_at_1_std
value: -5.5159
- type: nauc_mrr_at_1_diff1
value: 84.6697
- type: nauc_mrr_at_3_max
value: 77.6991
- type: nauc_mrr_at_3_std
value: -2.3189
- type: nauc_mrr_at_3_diff1
value: 82.0708
- type: nauc_mrr_at_5_max
value: 77.4286
- type: nauc_mrr_at_5_std
value: -2.721
- type: nauc_mrr_at_5_diff1
value: 82.0265
- type: nauc_mrr_at_10_max
value: 77.4212
- type: nauc_mrr_at_10_std
value: -2.633
- type: nauc_mrr_at_10_diff1
value: 82.109
- type: nauc_mrr_at_20_max
value: 77.4188
- type: nauc_mrr_at_20_std
value: -2.6752000000000002
- type: nauc_mrr_at_20_diff1
value: 82.19340000000001
- type: nauc_mrr_at_100_max
value: 77.4169
- type: nauc_mrr_at_100_std
value: -2.6487
- type: nauc_mrr_at_100_diff1
value: 82.2353
- type: nauc_mrr_at_1000_max
value: 77.413
- type: nauc_mrr_at_1000_std
value: -2.6639
- type: nauc_mrr_at_1000_diff1
value: 82.238
- type: main_score
value: 83.92699999999999
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: pearson
value: 29.8395
- type: spearman
value: 29.383
- type: cosine_spearman
value: 29.383
- type: cosine_pearson
value: 29.8395
- type: dot_spearman
value: 29.383
- type: dot_pearson
value: 29.8395
- type: main_score
value: 29.383
- task:
type: Retrieval
dataset:
name: MTEB SyntheticText2SQL (default)
type: CoIR-Retrieval/synthetic-text2sql
config: default
split: test
revision: 686b87296c3a0191b5d9415a00526c62db9fce09
metrics:
- type: ndcg_at_1
value: 4.222
- type: ndcg_at_3
value: 38.329
- type: ndcg_at_5
value: 42.076
- type: ndcg_at_10
value: 44.775
- type: ndcg_at_20
value: 46.528999999999996
- type: ndcg_at_100
value: 48.554
- type: ndcg_at_1000
value: 49.143
- type: map_at_1
value: 4.222
- type: map_at_3
value: 30.676
- type: map_at_5
value: 32.76
- type: map_at_10
value: 33.898
- type: map_at_20
value: 34.386
- type: map_at_100
value: 34.677
- type: map_at_1000
value: 34.701
- type: recall_at_1
value: 4.222
- type: recall_at_3
value: 60.178
- type: recall_at_5
value: 69.253
- type: recall_at_10
value: 77.474
- type: recall_at_20
value: 84.36200000000001
- type: recall_at_100
value: 95.12899999999999
- type: recall_at_1000
value: 99.675
- type: precision_at_1
value: 4.222
- type: precision_at_3
value: 20.058999999999997
- type: precision_at_5
value: 13.850999999999999
- type: precision_at_10
value: 7.747
- type: precision_at_20
value: 4.218
- type: precision_at_100
value: 0.951
- type: precision_at_1000
value: 0.1
- type: mrr_at_1
value: 27.3287
- type: mrr_at_3
value: 43.8956
- type: mrr_at_5
value: 45.656
- type: mrr_at_10
value: 46.6697
- type: mrr_at_20
value: 47.1331
- type: mrr_at_100
value: 47.4153
- type: mrr_at_1000
value: 47.4391
- type: nauc_ndcg_at_1_max
value: 16.045
- type: nauc_ndcg_at_1_std
value: -8.7715
- type: nauc_ndcg_at_1_diff1
value: 48.4886
- type: nauc_ndcg_at_3_max
value: 30.771500000000003
- type: nauc_ndcg_at_3_std
value: -16.2537
- type: nauc_ndcg_at_3_diff1
value: -59.0158
- type: nauc_ndcg_at_5_max
value: 30.354
- type: nauc_ndcg_at_5_std
value: -16.576
- type: nauc_ndcg_at_5_diff1
value: -55.0555
- type: nauc_ndcg_at_10_max
value: 30.0579
- type: nauc_ndcg_at_10_std
value: -16.3765
- type: nauc_ndcg_at_10_diff1
value: -52.5829
- type: nauc_ndcg_at_20_max
value: 29.8131
- type: nauc_ndcg_at_20_std
value: -15.7493
- type: nauc_ndcg_at_20_diff1
value: -51.1605
- type: nauc_ndcg_at_100_max
value: 29.9313
- type: nauc_ndcg_at_100_std
value: -14.9786
- type: nauc_ndcg_at_100_diff1
value: -49.6997
- type: nauc_ndcg_at_1000_max
value: 29.7154
- type: nauc_ndcg_at_1000_std
value: -15.2567
- type: nauc_ndcg_at_1000_diff1
value: -49.660399999999996
- type: nauc_map_at_1_max
value: 16.045
- type: nauc_map_at_1_std
value: -8.7715
- type: nauc_map_at_1_diff1
value: 48.4886
- type: nauc_map_at_3_max
value: 29.6122
- type: nauc_map_at_3_std
value: -15.509500000000001
- type: nauc_map_at_3_diff1
value: -52.033300000000004
- type: nauc_map_at_5_max
value: 29.3076
- type: nauc_map_at_5_std
value: -15.7
- type: nauc_map_at_5_diff1
value: -49.1839
- type: nauc_map_at_10_max
value: 29.1468
- type: nauc_map_at_10_std
value: -15.564400000000001
- type: nauc_map_at_10_diff1
value: -47.7791
- type: nauc_map_at_20_max
value: 29.0578
- type: nauc_map_at_20_std
value: -15.3635
- type: nauc_map_at_20_diff1
value: -47.2635
- type: nauc_map_at_100_max
value: 29.0523
- type: nauc_map_at_100_std
value: -15.2602
- type: nauc_map_at_100_diff1
value: -46.9875
- type: nauc_map_at_1000_max
value: 29.048299999999998
- type: nauc_map_at_1000_std
value: -15.2626
- type: nauc_map_at_1000_diff1
value: -46.98
- type: nauc_recall_at_1_max
value: 16.045
- type: nauc_recall_at_1_std
value: -8.7715
- type: nauc_recall_at_1_diff1
value: 48.4886
- type: nauc_recall_at_3_max
value: 32.8552
- type: nauc_recall_at_3_std
value: -17.6374
- type: nauc_recall_at_3_diff1
value: -71.1273
- type: nauc_recall_at_5_max
value: 32.378299999999996
- type: nauc_recall_at_5_std
value: -18.411
- type: nauc_recall_at_5_diff1
value: -65.7517
- type: nauc_recall_at_10_max
value: 32.041799999999995
- type: nauc_recall_at_10_std
value: -18.4057
- type: nauc_recall_at_10_diff1
value: -62.019999999999996
- type: nauc_recall_at_20_max
value: 31.663999999999998
- type: nauc_recall_at_20_std
value: -16.352800000000002
- type: nauc_recall_at_20_diff1
value: -59.1186
- type: nauc_recall_at_100_max
value: 37.872499999999995
- type: nauc_recall_at_100_std
value: -4.3914
- type: nauc_recall_at_100_diff1
value: -51.8363
- type: nauc_recall_at_1000_max
value: 59.5105
- type: nauc_recall_at_1000_std
value: 23.3375
- type: nauc_recall_at_1000_diff1
value: -73.9075
- type: nauc_precision_at_1_max
value: 16.045
- type: nauc_precision_at_1_std
value: -8.7715
- type: nauc_precision_at_1_diff1
value: 48.4886
- type: nauc_precision_at_3_max
value: 32.8552
- type: nauc_precision_at_3_std
value: -17.6374
- type: nauc_precision_at_3_diff1
value: -71.1273
- type: nauc_precision_at_5_max
value: 32.378299999999996
- type: nauc_precision_at_5_std
value: -18.411
- type: nauc_precision_at_5_diff1
value: -65.7517
- type: nauc_precision_at_10_max
value: 32.041799999999995
- type: nauc_precision_at_10_std
value: -18.4057
- type: nauc_precision_at_10_diff1
value: -62.019999999999996
- type: nauc_precision_at_20_max
value: 31.663999999999998
- type: nauc_precision_at_20_std
value: -16.352800000000002
- type: nauc_precision_at_20_diff1
value: -59.1186
- type: nauc_precision_at_100_max
value: 37.872499999999995
- type: nauc_precision_at_100_std
value: -4.3914
- type: nauc_precision_at_100_diff1
value: -51.8363
- type: nauc_precision_at_1000_max
value: 59.5105
- type: nauc_precision_at_1000_std
value: 23.3375
- type: nauc_precision_at_1000_diff1
value: -73.9075
- type: nauc_mrr_at_1_max
value: 15.1452
- type: nauc_mrr_at_1_std
value: -9.760399999999999
- type: nauc_mrr_at_1_diff1
value: -39.2235
- type: nauc_mrr_at_3_max
value: 23.6826
- type: nauc_mrr_at_3_std
value: -13.300899999999999
- type: nauc_mrr_at_3_diff1
value: -55.17809999999999
- type: nauc_mrr_at_5_max
value: 23.3754
- type: nauc_mrr_at_5_std
value: -13.306299999999998
- type: nauc_mrr_at_5_diff1
value: -53.744499999999995
- type: nauc_mrr_at_10_max
value: 23.0703
- type: nauc_mrr_at_10_std
value: -13.1632
- type: nauc_mrr_at_10_diff1
value: -53.2374
- type: nauc_mrr_at_20_max
value: 22.9496
- type: nauc_mrr_at_20_std
value: -13.031
- type: nauc_mrr_at_20_diff1
value: -53.016
- type: nauc_mrr_at_100_max
value: 22.9044
- type: nauc_mrr_at_100_std
value: -12.9409
- type: nauc_mrr_at_100_diff1
value: -52.9092
- type: nauc_mrr_at_1000_max
value: 22.897100000000002
- type: nauc_mrr_at_1000_std
value: -12.940399999999999
- type: nauc_mrr_at_1000_diff1
value: -52.9095
- type: main_score
value: 44.775
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: ndcg_at_1
value: 70.0
- type: ndcg_at_3
value: 68.704
- type: ndcg_at_5
value: 67.533
- type: ndcg_at_10
value: 63.098
- type: ndcg_at_20
value: 60.507999999999996
- type: ndcg_at_100
value: 49.847
- type: ndcg_at_1000
value: 48.394999999999996
- type: map_at_1
value: 0.211
- type: map_at_3
value: 0.555
- type: map_at_5
value: 0.873
- type: map_at_10
value: 1.526
- type: map_at_20
value: 2.731
- type: map_at_100
value: 8.863
- type: map_at_1000
value: 23.162
- type: recall_at_1
value: 0.211
- type: recall_at_3
value: 0.5930000000000001
- type: recall_at_5
value: 0.962
- type: recall_at_10
value: 1.748
- type: recall_at_20
value: 3.318
- type: recall_at_100
value: 12.447999999999999
- type: recall_at_1000
value: 46.794999999999995
- type: precision_at_1
value: 76.0
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 71.6
- type: precision_at_10
value: 66.0
- type: precision_at_20
value: 63.6
- type: precision_at_100
value: 51.339999999999996
- type: precision_at_1000
value: 21.68
- type: mrr_at_1
value: 76.0
- type: mrr_at_3
value: 84.0
- type: mrr_at_5
value: 84.39999999999999
- type: mrr_at_10
value: 84.85000000000001
- type: mrr_at_20
value: 84.85000000000001
- type: mrr_at_100
value: 84.85000000000001
- type: mrr_at_1000
value: 84.85000000000001
- type: nauc_ndcg_at_1_max
value: 48.710300000000004
- type: nauc_ndcg_at_1_std
value: 72.6125
- type: nauc_ndcg_at_1_diff1
value: -19.9816
- type: nauc_ndcg_at_3_max
value: 44.8032
- type: nauc_ndcg_at_3_std
value: 64.7227
- type: nauc_ndcg_at_3_diff1
value: -25.933899999999998
- type: nauc_ndcg_at_5_max
value: 44.7004
- type: nauc_ndcg_at_5_std
value: 65.05330000000001
- type: nauc_ndcg_at_5_diff1
value: -26.0531
- type: nauc_ndcg_at_10_max
value: 49.5716
- type: nauc_ndcg_at_10_std
value: 66.18730000000001
- type: nauc_ndcg_at_10_diff1
value: -22.3525
- type: nauc_ndcg_at_20_max
value: 49.0212
- type: nauc_ndcg_at_20_std
value: 71.2387
- type: nauc_ndcg_at_20_diff1
value: -21.6522
- type: nauc_ndcg_at_100_max
value: 47.3029
- type: nauc_ndcg_at_100_std
value: 82.31819999999999
- type: nauc_ndcg_at_100_diff1
value: -27.5265
- type: nauc_ndcg_at_1000_max
value: 38.8474
- type: nauc_ndcg_at_1000_std
value: 77.1578
- type: nauc_ndcg_at_1000_diff1
value: -29.350700000000003
- type: nauc_map_at_1_max
value: 16.4698
- type: nauc_map_at_1_std
value: 9.657300000000001
- type: nauc_map_at_1_diff1
value: -4.3484
- type: nauc_map_at_3_max
value: 25.183299999999996
- type: nauc_map_at_3_std
value: 16.8245
- type: nauc_map_at_3_diff1
value: -7.1254
- type: nauc_map_at_5_max
value: 24.5899
- type: nauc_map_at_5_std
value: 19.8027
- type: nauc_map_at_5_diff1
value: -9.8547
- type: nauc_map_at_10_max
value: 34.9032
- type: nauc_map_at_10_std
value: 26.435599999999997
- type: nauc_map_at_10_diff1
value: -8.833499999999999
- type: nauc_map_at_20_max
value: 40.551700000000004
- type: nauc_map_at_20_std
value: 34.6141
- type: nauc_map_at_20_diff1
value: -8.578199999999999
- type: nauc_map_at_100_max
value: 51.403299999999994
- type: nauc_map_at_100_std
value: 68.4083
- type: nauc_map_at_100_diff1
value: -17.7135
- type: nauc_map_at_1000_max
value: 48.9955
- type: nauc_map_at_1000_std
value: 82.9784
- type: nauc_map_at_1000_diff1
value: -26.473000000000003
- type: nauc_recall_at_1_max
value: 16.4698
- type: nauc_recall_at_1_std
value: 9.657300000000001
- type: nauc_recall_at_1_diff1
value: -4.3484
- type: nauc_recall_at_3_max
value: 21.4136
- type: nauc_recall_at_3_std
value: 11.4801
- type: nauc_recall_at_3_diff1
value: -7.1396
- type: nauc_recall_at_5_max
value: 18.0314
- type: nauc_recall_at_5_std
value: 12.7486
- type: nauc_recall_at_5_diff1
value: -9.7349
- type: nauc_recall_at_10_max
value: 27.8032
- type: nauc_recall_at_10_std
value: 18.7061
- type: nauc_recall_at_10_diff1
value: -9.2739
- type: nauc_recall_at_20_max
value: 30.878299999999996
- type: nauc_recall_at_20_std
value: 26.0295
- type: nauc_recall_at_20_diff1
value: -7.8001000000000005
- type: nauc_recall_at_100_max
value: 39.4065
- type: nauc_recall_at_100_std
value: 56.112399999999994
- type: nauc_recall_at_100_diff1
value: -17.8753
- type: nauc_recall_at_1000_max
value: 31.571199999999997
- type: nauc_recall_at_1000_std
value: 65.3181
- type: nauc_recall_at_1000_diff1
value: -26.398899999999998
- type: nauc_precision_at_1_max
value: 59.8382
- type: nauc_precision_at_1_std
value: 66.9075
- type: nauc_precision_at_1_diff1
value: -5.1873000000000005
- type: nauc_precision_at_3_max
value: 55.787600000000005
- type: nauc_precision_at_3_std
value: 64.1127
- type: nauc_precision_at_3_diff1
value: -24.3791
- type: nauc_precision_at_5_max
value: 50.0544
- type: nauc_precision_at_5_std
value: 61.812599999999996
- type: nauc_precision_at_5_diff1
value: -24.5456
- type: nauc_precision_at_10_max
value: 57.4695
- type: nauc_precision_at_10_std
value: 63.7448
- type: nauc_precision_at_10_diff1
value: -22.6982
- type: nauc_precision_at_20_max
value: 57.3052
- type: nauc_precision_at_20_std
value: 72.00619999999999
- type: nauc_precision_at_20_diff1
value: -18.2329
- type: nauc_precision_at_100_max
value: 50.0873
- type: nauc_precision_at_100_std
value: 84.9689
- type: nauc_precision_at_100_diff1
value: -27.625300000000003
- type: nauc_precision_at_1000_max
value: 29.3103
- type: nauc_precision_at_1000_std
value: 57.898700000000005
- type: nauc_precision_at_1000_diff1
value: -28.8765
- type: nauc_mrr_at_1_max
value: 59.8382
- type: nauc_mrr_at_1_std
value: 66.9075
- type: nauc_mrr_at_1_diff1
value: -5.1873000000000005
- type: nauc_mrr_at_3_max
value: 58.4682
- type: nauc_mrr_at_3_std
value: 64.6751
- type: nauc_mrr_at_3_diff1
value: -5.9737
- type: nauc_mrr_at_5_max
value: 59.099999999999994
- type: nauc_mrr_at_5_std
value: 63.6902
- type: nauc_mrr_at_5_diff1
value: -6.482499999999999
- type: nauc_mrr_at_10_max
value: 57.9638
- type: nauc_mrr_at_10_std
value: 63.716300000000004
- type: nauc_mrr_at_10_diff1
value: -5.6598999999999995
- type: nauc_mrr_at_20_max
value: 57.9638
- type: nauc_mrr_at_20_std
value: 63.716300000000004
- type: nauc_mrr_at_20_diff1
value: -5.6598999999999995
- type: nauc_mrr_at_100_max
value: 57.9638
- type: nauc_mrr_at_100_std
value: 63.716300000000004
- type: nauc_mrr_at_100_diff1
value: -5.6598999999999995
- type: nauc_mrr_at_1000_max
value: 57.9638
- type: nauc_mrr_at_1000_std
value: 63.716300000000004
- type: nauc_mrr_at_1000_diff1
value: -5.6598999999999995
- type: main_score
value: 63.098
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: ndcg_at_1
value: 23.469
- type: ndcg_at_3
value: 25.522
- type: ndcg_at_5
value: 24.333
- type: ndcg_at_10
value: 24.029
- type: ndcg_at_20
value: 24.573
- type: ndcg_at_100
value: 34.425
- type: ndcg_at_1000
value: 46.907
- type: map_at_1
value: 1.976
- type: map_at_3
value: 4.589
- type: map_at_5
value: 6.555999999999999
- type: map_at_10
value: 9.687999999999999
- type: map_at_20
value: 11.926
- type: map_at_100
value: 15.116999999999999
- type: map_at_1000
value: 16.769000000000002
- type: recall_at_1
value: 1.976
- type: recall_at_3
value: 6.101
- type: recall_at_5
value: 9.68
- type: recall_at_10
value: 16.633
- type: recall_at_20
value: 23.589
- type: recall_at_100
value: 45.61
- type: recall_at_1000
value: 82.48100000000001
- type: precision_at_1
value: 26.531
- type: precision_at_3
value: 27.891
- type: precision_at_5
value: 25.714
- type: precision_at_10
value: 22.448999999999998
- type: precision_at_20
value: 16.837
- type: precision_at_100
value: 7.122000000000001
- type: precision_at_1000
value: 1.5270000000000001
- type: mrr_at_1
value: 26.5306
- type: mrr_at_3
value: 39.1156
- type: mrr_at_5
value: 41.1565
- type: mrr_at_10
value: 43.863
- type: mrr_at_20
value: 44.5963
- type: mrr_at_100
value: 44.766600000000004
- type: mrr_at_1000
value: 44.766600000000004
- type: nauc_ndcg_at_1_max
value: -31.661099999999998
- type: nauc_ndcg_at_1_std
value: 2.8871
- type: nauc_ndcg_at_1_diff1
value: 3.4787
- type: nauc_ndcg_at_3_max
value: -34.6673
- type: nauc_ndcg_at_3_std
value: -3.8882
- type: nauc_ndcg_at_3_diff1
value: 0.6512
- type: nauc_ndcg_at_5_max
value: -33.815
- type: nauc_ndcg_at_5_std
value: 0.20209999999999997
- type: nauc_ndcg_at_5_diff1
value: -6.4072000000000005
- type: nauc_ndcg_at_10_max
value: -26.9953
- type: nauc_ndcg_at_10_std
value: -3.6511
- type: nauc_ndcg_at_10_diff1
value: -3.8763
- type: nauc_ndcg_at_20_max
value: -30.218600000000002
- type: nauc_ndcg_at_20_std
value: -1.4384
- type: nauc_ndcg_at_20_diff1
value: -8.5927
- type: nauc_ndcg_at_100_max
value: -32.1409
- type: nauc_ndcg_at_100_std
value: 20.1662
- type: nauc_ndcg_at_100_diff1
value: -12.0591
- type: nauc_ndcg_at_1000_max
value: -31.6892
- type: nauc_ndcg_at_1000_std
value: 32.1464
- type: nauc_ndcg_at_1000_diff1
value: -8.3651
- type: nauc_map_at_1_max
value: -41.9612
- type: nauc_map_at_1_std
value: -11.0332
- type: nauc_map_at_1_diff1
value: -5.2508
- type: nauc_map_at_3_max
value: -30.4968
- type: nauc_map_at_3_std
value: -11.138
- type: nauc_map_at_3_diff1
value: -0.8447
- type: nauc_map_at_5_max
value: -24.7543
- type: nauc_map_at_5_std
value: -10.302
- type: nauc_map_at_5_diff1
value: -10.0762
- type: nauc_map_at_10_max
value: -20.420099999999998
- type: nauc_map_at_10_std
value: -10.485
- type: nauc_map_at_10_diff1
value: -10.3134
- type: nauc_map_at_20_max
value: -20.8606
- type: nauc_map_at_20_std
value: -6.3984
- type: nauc_map_at_20_diff1
value: -10.8605
- type: nauc_map_at_100_max
value: -22.6385
- type: nauc_map_at_100_std
value: 3.8738
- type: nauc_map_at_100_diff1
value: -12.9055
- type: nauc_map_at_1000_max
value: -23.0823
- type: nauc_map_at_1000_std
value: 8.6942
- type: nauc_map_at_1000_diff1
value: -13.1715
- type: nauc_recall_at_1_max
value: -41.9612
- type: nauc_recall_at_1_std
value: -11.0332
- type: nauc_recall_at_1_diff1
value: -5.2508
- type: nauc_recall_at_3_max
value: -25.9715
- type: nauc_recall_at_3_std
value: -14.9623
- type: nauc_recall_at_3_diff1
value: -4.2583
- type: nauc_recall_at_5_max
value: -24.5848
- type: nauc_recall_at_5_std
value: -14.258299999999998
- type: nauc_recall_at_5_diff1
value: -13.1162
- type: nauc_recall_at_10_max
value: -22.3834
- type: nauc_recall_at_10_std
value: -15.274199999999999
- type: nauc_recall_at_10_diff1
value: -10.8836
- type: nauc_recall_at_20_max
value: -22.8634
- type: nauc_recall_at_20_std
value: -4.8215
- type: nauc_recall_at_20_diff1
value: -11.1747
- type: nauc_recall_at_100_max
value: -25.9537
- type: nauc_recall_at_100_std
value: 29.75
- type: nauc_recall_at_100_diff1
value: -15.512799999999999
- type: nauc_recall_at_1000_max
value: -18.9449
- type: nauc_recall_at_1000_std
value: 69.619
- type: nauc_recall_at_1000_diff1
value: -5.629300000000001
- type: nauc_precision_at_1_max
value: -33.7627
- type: nauc_precision_at_1_std
value: 1.8065000000000002
- type: nauc_precision_at_1_diff1
value: 5.3592
- type: nauc_precision_at_3_max
value: -30.7992
- type: nauc_precision_at_3_std
value: -6.285399999999999
- type: nauc_precision_at_3_diff1
value: 1.1098000000000001
- type: nauc_precision_at_5_max
value: -27.8949
- type: nauc_precision_at_5_std
value: -1.8754
- type: nauc_precision_at_5_diff1
value: -8.0528
- type: nauc_precision_at_10_max
value: -19.659299999999998
- type: nauc_precision_at_10_std
value: -0.9809999999999999
- type: nauc_precision_at_10_diff1
value: -2.0972999999999997
- type: nauc_precision_at_20_max
value: -25.810899999999997
- type: nauc_precision_at_20_std
value: 19.5577
- type: nauc_precision_at_20_diff1
value: -8.879199999999999
- type: nauc_precision_at_100_max
value: -21.1488
- type: nauc_precision_at_100_std
value: 65.00200000000001
- type: nauc_precision_at_100_diff1
value: -11.740499999999999
- type: nauc_precision_at_1000_max
value: 20.7392
- type: nauc_precision_at_1000_std
value: 38.2851
- type: nauc_precision_at_1000_diff1
value: 17.4954
- type: nauc_mrr_at_1_max
value: -33.7627
- type: nauc_mrr_at_1_std
value: 1.8065000000000002
- type: nauc_mrr_at_1_diff1
value: 5.3592
- type: nauc_mrr_at_3_max
value: -39.837
- type: nauc_mrr_at_3_std
value: -5.3861
- type: nauc_mrr_at_3_diff1
value: -4.1776
- type: nauc_mrr_at_5_max
value: -39.756099999999996
- type: nauc_mrr_at_5_std
value: -5.3674
- type: nauc_mrr_at_5_diff1
value: -2.4693
- type: nauc_mrr_at_10_max
value: -37.7379
- type: nauc_mrr_at_10_std
value: -6.2844
- type: nauc_mrr_at_10_diff1
value: -0.6525000000000001
- type: nauc_mrr_at_20_max
value: -38.4522
- type: nauc_mrr_at_20_std
value: -5.0927
- type: nauc_mrr_at_20_diff1
value: -0.2814
- type: nauc_mrr_at_100_max
value: -38.1599
- type: nauc_mrr_at_100_std
value: -5.2147
- type: nauc_mrr_at_100_diff1
value: -0.7001000000000001
- type: nauc_mrr_at_1000_max
value: -38.1599
- type: nauc_mrr_at_1000_std
value: -5.2147
- type: nauc_mrr_at_1000_diff1
value: -0.7001000000000001
- type: main_score
value: 24.029
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 62.9395
- type: f1
value: 47.7133
- type: f1_weighted
value: 71.0525
- type: ap
value: 10.306600000000001
- type: ap_weighted
value: 10.306600000000001
- type: main_score
value: 62.9395
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 52.8721
- type: f1
value: 53.034800000000004
- type: f1_weighted
value: 52.4319
- type: main_score
value: 52.8721
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 44.9227
- type: v_measure_std
value: 1.1638000000000002
- type: main_score
value: 44.9227
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: similarity_accuracy
value: 82.04090000000001
- type: similarity_accuracy_threshold
value: 86.6147
- type: similarity_f1
value: 57.258399999999995
- type: similarity_f1_threshold
value: 82.9233
- type: similarity_precision
value: 52.1456
- type: similarity_recall
value: 63.4828
- type: similarity_ap
value: 60.0317
- type: cosine_accuracy
value: 82.04090000000001
- type: cosine_accuracy_threshold
value: 86.6147
- type: cosine_f1
value: 57.258399999999995
- type: cosine_f1_threshold
value: 82.9233
- type: cosine_precision
value: 52.1456
- type: cosine_recall
value: 63.4828
- type: cosine_ap
value: 60.0317
- type: manhattan_accuracy
value: 81.9574
- type: manhattan_accuracy_threshold
value: 794.4433
- type: manhattan_f1
value: 57.1936
- type: manhattan_f1_threshold
value: 898.9445
- type: manhattan_precision
value: 51.91480000000001
- type: manhattan_recall
value: 63.6675
- type: manhattan_ap
value: 59.9255
- type: euclidean_accuracy
value: 82.04090000000001
- type: euclidean_accuracy_threshold
value: 51.7403
- type: euclidean_f1
value: 57.258399999999995
- type: euclidean_f1_threshold
value: 58.440999999999995
- type: euclidean_precision
value: 52.1456
- type: euclidean_recall
value: 63.4828
- type: euclidean_ap
value: 60.0317
- type: dot_accuracy
value: 82.04090000000001
- type: dot_accuracy_threshold
value: 86.6147
- type: dot_f1
value: 57.258399999999995
- type: dot_f1_threshold
value: 82.9233
- type: dot_precision
value: 52.1456
- type: dot_recall
value: 63.4828
- type: dot_ap
value: 60.0317
- type: max_accuracy
value: 82.04090000000001
- type: max_f1
value: 57.258399999999995
- type: max_precision
value: 52.1456
- type: max_recall
value: 63.6675
- type: max_ap
value: 60.0317
- type: main_score
value: 60.0317
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: similarity_accuracy
value: 87.3035
- type: similarity_accuracy_threshold
value: 85.4123
- type: similarity_f1
value: 74.5555
- type: similarity_f1_threshold
value: 83.7581
- type: similarity_precision
value: 72.55369999999999
- type: similarity_recall
value: 76.6708
- type: similarity_ap
value: 82.42930000000001
- type: cosine_accuracy
value: 87.3035
- type: cosine_accuracy_threshold
value: 85.4123
- type: cosine_f1
value: 74.5555
- type: cosine_f1_threshold
value: 83.7581
- type: cosine_precision
value: 72.55369999999999
- type: cosine_recall
value: 76.6708
- type: cosine_ap
value: 82.42930000000001
- type: manhattan_accuracy
value: 87.3249
- type: manhattan_accuracy_threshold
value: 831.9304999999999
- type: manhattan_f1
value: 74.8665
- type: manhattan_f1_threshold
value: 893.9980999999999
- type: manhattan_precision
value: 70.8502
- type: manhattan_recall
value: 79.3656
- type: manhattan_ap
value: 82.5792
- type: euclidean_accuracy
value: 87.3035
- type: euclidean_accuracy_threshold
value: 54.014300000000006
- type: euclidean_f1
value: 74.5555
- type: euclidean_f1_threshold
value: 56.9946
- type: euclidean_precision
value: 72.55369999999999
- type: euclidean_recall
value: 76.6708
- type: euclidean_ap
value: 82.42920000000001
- type: dot_accuracy
value: 87.3035
- type: dot_accuracy_threshold
value: 85.4123
- type: dot_f1
value: 74.5555
- type: dot_f1_threshold
value: 83.7581
- type: dot_precision
value: 72.55369999999999
- type: dot_recall
value: 76.6708
- type: dot_ap
value: 82.42920000000001
- type: max_accuracy
value: 87.3249
- type: max_f1
value: 74.8665
- type: max_precision
value: 72.55369999999999
- type: max_recall
value: 79.3656
- type: max_ap
value: 82.5792
- type: main_score
value: 82.5792
---
# danbev/granite-embedding-30m-english-Q8_0-GGUF
This model was converted to GGUF format from [`ibm-granite/granite-embedding-30m-english`](https://huggingface.co/ibm-granite/granite-embedding-30m-english) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ibm-granite/granite-embedding-30m-english) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo danbev/granite-embedding-30m-english-Q8_0-GGUF --hf-file granite-embedding-30m-english-q8_0.gguf -p "The meaning to life and the universe is"
```
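Since this is an embedding model, the generation prompt above mainly serves as a smoke test. llama.cpp also ships a dedicated `llama-embedding` example that prints the embedding vector for a prompt; a minimal sketch, assuming your build includes it and that it accepts the same `--hf-repo`/`--hf-file` flags:
```bash
llama-embedding --hf-repo danbev/granite-embedding-30m-english-Q8_0-GGUF --hf-file granite-embedding-30m-english-q8_0.gguf -p "What is a GGUF file?"
```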
### Server:
```bash
llama-server --hf-repo danbev/granite-embedding-30m-english-Q8_0-GGUF --hf-file granite-embedding-30m-english-q8_0.gguf -c 2048
```
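To serve embeddings over HTTP, the server has to be started in embedding mode. A sketch, assuming the `--embedding` flag and the OpenAI-compatible `/v1/embeddings` endpoint available in recent llama.cpp builds:
```bash
# Start the server in embedding mode (flag name may vary by llama.cpp version)
llama-server --hf-repo danbev/granite-embedding-30m-english-Q8_0-GGUF --hf-file granite-embedding-30m-english-q8_0.gguf -c 2048 --embedding
# Request an embedding from the OpenAI-compatible endpoint
curl http://localhost:8080/v1/embeddings -H "Content-Type: application/json" \
  -d '{"input": "The meaning to life and the universe is"}'
```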
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
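Recent llama.cpp checkouts have replaced the Makefile build with CMake, so if `make` fails, the equivalent CMake invocation is roughly the following (a sketch; the `LLAMA_CURL` option name is assumed from the current build docs, and binaries land under `build/bin/`):
```bash
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```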
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo danbev/granite-embedding-30m-english-Q8_0-GGUF --hf-file granite-embedding-30m-english-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo danbev/granite-embedding-30m-english-Q8_0-GGUF --hf-file granite-embedding-30m-english-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
0xMaka/based-bert-sc | 0xMaka | text-classification | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:0xMaka/trading-candles-subset-sc-format",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-11T17:56:55 | 2023-07-11T22:28:41 | 49 | 1 | ---
datasets:
- 0xMaka/trading-candles-subset-sc-format
language:
- en
license: gpl
metrics:
- accuracy
- f1
widget:
- text: 'identify candle: 17284.58,17264.41,17284.58,17264.41'
example_title: Bear
- text: 'identify candle: open: 17343.43, close: 17625.18, high: 17804.68, low: 17322.15'
example_title: Bull
---
# Based Bert for sequence classification
This model is a POC and shouldn't be used for any production task.
## Model description
Based Bert SC is a text classification bot for binary classification of a trading candle's opening and closing prices.
## Uses and limitations
This model can reliably return the bullish or bearish status of a candle given the opening, closing, high, and low prices in the format shown below.
It will have trouble if the order of the numbers changes (even if tags are included).
### How to use
You can use this model directly with a pipeline:
```python
>>> from transformers import pipeline
>>> pipe = pipeline("text-classification", model="0xMaka/based-bert-sc")
>>> text = "identify candle: open: 21788.19, close: 21900, high: 21965.23, low: 21788.19"
>>> pipe(text)
[{'label': 'Bullish', 'score': 0.9999682903289795}]
```
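The pipeline also accepts a list of strings, so a batch of candles can be classified in one call (labels follow the widget examples above; scores elided):
```python
>>> texts = [
...     "identify candle: open: 17343.43, close: 17625.18, high: 17804.68, low: 17322.15",
...     "identify candle: 17284.58,17264.41,17284.58,17264.41",
... ]
>>> pipe(texts)
[{'label': 'Bullish', 'score': ...}, {'label': 'Bearish', 'score': ...}]
```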
## Finetuning
For training parameters, see https://github.com/0xMaka/based-bert-sc/blob/main/trainer.py
This model was fine-tuned on an RTX 3060 Mobile.
```python
# Theoretical memory bandwidth of the RTX 3060 Mobile (GDDR6)
BUS_WIDTH = 192    # memory bus width in bits
CLOCK_RATE = 1750  # memory clock in MHz
DDR_MULTI = 8      # GDDR6 transfers per clock
bw_theoretical = ((CLOCK_RATE * 10**6) * (BUS_WIDTH / 8) * DDR_MULTI) / 10**9
print(bw_theoretical)  # 336.0 GB/s
```
Self-measured effective bandwidth (GB/s): 316.280736
| [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
ef-zulla/e5-multi-sml-torch | ef-zulla | sentence-similarity | [
"sentence-transformers",
"pytorch",
"bert",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2212.03533",
"arxiv:2108.08787",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-10-16T09:22:58 | 2023-10-16T09:27:15 | 49 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: multilingual-e5-small
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.79104477611939
- type: ap
value: 36.9996434842022
- type: f1
value: 67.95453679103099
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.64882226980728
- type: ap
value: 82.11942130026586
- type: f1
value: 69.87963421606715
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.8095952023988
- type: ap
value: 24.46869495579561
- type: f1
value: 63.00108480037597
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 64.186295503212
- type: ap
value: 15.496804690197042
- type: f1
value: 52.07153895475031
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 88.699325
- type: ap
value: 85.27039559917269
- type: f1
value: 88.65556295032513
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.69799999999999
- type: f1
value: 43.73187348654165
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.245999999999995
- type: f1
value: 39.3863530637684
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.394
- type: f1
value: 39.301223469483446
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.864
- type: f1
value: 37.97974261868003
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.682
- type: f1
value: 37.07399369768313
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.504
- type: f1
value: 36.62317273874278
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.061
- type: map_at_10
value: 31.703
- type: map_at_100
value: 32.967
- type: map_at_1000
value: 33.001000000000005
- type: map_at_3
value: 27.466
- type: map_at_5
value: 29.564
- type: mrr_at_1
value: 19.559
- type: mrr_at_10
value: 31.874999999999996
- type: mrr_at_100
value: 33.146
- type: mrr_at_1000
value: 33.18
- type: mrr_at_3
value: 27.667
- type: mrr_at_5
value: 29.74
- type: ndcg_at_1
value: 19.061
- type: ndcg_at_10
value: 39.062999999999995
- type: ndcg_at_100
value: 45.184000000000005
- type: ndcg_at_1000
value: 46.115
- type: ndcg_at_3
value: 30.203000000000003
- type: ndcg_at_5
value: 33.953
- type: precision_at_1
value: 19.061
- type: precision_at_10
value: 6.279999999999999
- type: precision_at_100
value: 0.9129999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 12.706999999999999
- type: precision_at_5
value: 9.431000000000001
- type: recall_at_1
value: 19.061
- type: recall_at_10
value: 62.802
- type: recall_at_100
value: 91.323
- type: recall_at_1000
value: 98.72
- type: recall_at_3
value: 38.122
- type: recall_at_5
value: 47.155
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 39.22266660528253
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 30.79980849482483
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 57.8790068352054
- type: mrr
value: 71.78791276436706
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.36328364043163
- type: cos_sim_spearman
value: 82.26211536195868
- type: euclidean_pearson
value: 80.3183865039173
- type: euclidean_spearman
value: 79.88495276296132
- type: manhattan_pearson
value: 80.14484480692127
- type: manhattan_spearman
value: 80.39279565980743
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.0375782881002
- type: f1
value: 97.86012526096033
- type: precision
value: 97.77139874739039
- type: recall
value: 98.0375782881002
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 93.35241030156286
- type: f1
value: 92.66050333846944
- type: precision
value: 92.3306919069631
- type: recall
value: 93.35241030156286
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 94.0699688257707
- type: f1
value: 93.50236693222492
- type: precision
value: 93.22791825424315
- type: recall
value: 94.0699688257707
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 89.25750394944708
- type: f1
value: 88.79234684921889
- type: precision
value: 88.57293312269616
- type: recall
value: 89.25750394944708
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 79.41558441558442
- type: f1
value: 79.25886487487219
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.747820820329736
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 27.045143830596146
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.252999999999997
- type: map_at_10
value: 31.655916666666666
- type: map_at_100
value: 32.680749999999996
- type: map_at_1000
value: 32.79483333333334
- type: map_at_3
value: 29.43691666666666
- type: map_at_5
value: 30.717416666666665
- type: mrr_at_1
value: 28.602750000000004
- type: mrr_at_10
value: 35.56875
- type: mrr_at_100
value: 36.3595
- type: mrr_at_1000
value: 36.427749999999996
- type: mrr_at_3
value: 33.586166666666664
- type: mrr_at_5
value: 34.73641666666666
- type: ndcg_at_1
value: 28.602750000000004
- type: ndcg_at_10
value: 36.06933333333334
- type: ndcg_at_100
value: 40.70141666666667
- type: ndcg_at_1000
value: 43.24341666666667
- type: ndcg_at_3
value: 32.307916666666664
- type: ndcg_at_5
value: 34.129999999999995
- type: precision_at_1
value: 28.602750000000004
- type: precision_at_10
value: 6.097666666666667
- type: precision_at_100
value: 0.9809166666666668
- type: precision_at_1000
value: 0.13766666666666663
- type: precision_at_3
value: 14.628166666666667
- type: precision_at_5
value: 10.266916666666667
- type: recall_at_1
value: 24.252999999999997
- type: recall_at_10
value: 45.31916666666667
- type: recall_at_100
value: 66.03575000000001
- type: recall_at_1000
value: 83.94708333333334
- type: recall_at_3
value: 34.71941666666666
- type: recall_at_5
value: 39.46358333333333
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.024000000000001
- type: map_at_10
value: 15.644
- type: map_at_100
value: 17.154
- type: map_at_1000
value: 17.345
- type: map_at_3
value: 13.028
- type: map_at_5
value: 14.251
- type: mrr_at_1
value: 19.674
- type: mrr_at_10
value: 29.826999999999998
- type: mrr_at_100
value: 30.935000000000002
- type: mrr_at_1000
value: 30.987
- type: mrr_at_3
value: 26.645000000000003
- type: mrr_at_5
value: 28.29
- type: ndcg_at_1
value: 19.674
- type: ndcg_at_10
value: 22.545
- type: ndcg_at_100
value: 29.207
- type: ndcg_at_1000
value: 32.912
- type: ndcg_at_3
value: 17.952
- type: ndcg_at_5
value: 19.363
- type: precision_at_1
value: 19.674
- type: precision_at_10
value: 7.212000000000001
- type: precision_at_100
value: 1.435
- type: precision_at_1000
value: 0.212
- type: precision_at_3
value: 13.507
- type: precision_at_5
value: 10.397
- type: recall_at_1
value: 9.024000000000001
- type: recall_at_10
value: 28.077999999999996
- type: recall_at_100
value: 51.403
- type: recall_at_1000
value: 72.406
- type: recall_at_3
value: 16.768
- type: recall_at_5
value: 20.737
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.012
- type: map_at_10
value: 17.138
- type: map_at_100
value: 24.146
- type: map_at_1000
value: 25.622
- type: map_at_3
value: 12.552
- type: map_at_5
value: 14.435
- type: mrr_at_1
value: 62.25000000000001
- type: mrr_at_10
value: 71.186
- type: mrr_at_100
value: 71.504
- type: mrr_at_1000
value: 71.514
- type: mrr_at_3
value: 69.333
- type: mrr_at_5
value: 70.408
- type: ndcg_at_1
value: 49.75
- type: ndcg_at_10
value: 37.76
- type: ndcg_at_100
value: 42.071
- type: ndcg_at_1000
value: 49.309
- type: ndcg_at_3
value: 41.644
- type: ndcg_at_5
value: 39.812999999999995
- type: precision_at_1
value: 62.25000000000001
- type: precision_at_10
value: 30.15
- type: precision_at_100
value: 9.753
- type: precision_at_1000
value: 1.9189999999999998
- type: precision_at_3
value: 45.667
- type: precision_at_5
value: 39.15
- type: recall_at_1
value: 8.012
- type: recall_at_10
value: 22.599
- type: recall_at_100
value: 48.068
- type: recall_at_1000
value: 71.328
- type: recall_at_3
value: 14.043
- type: recall_at_5
value: 17.124
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 42.455
- type: f1
value: 37.59462649781862
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.092
- type: map_at_10
value: 69.586
- type: map_at_100
value: 69.968
- type: map_at_1000
value: 69.982
- type: map_at_3
value: 67.48100000000001
- type: map_at_5
value: 68.915
- type: mrr_at_1
value: 62.166
- type: mrr_at_10
value: 73.588
- type: mrr_at_100
value: 73.86399999999999
- type: mrr_at_1000
value: 73.868
- type: mrr_at_3
value: 71.6
- type: mrr_at_5
value: 72.99
- type: ndcg_at_1
value: 62.166
- type: ndcg_at_10
value: 75.27199999999999
- type: ndcg_at_100
value: 76.816
- type: ndcg_at_1000
value: 77.09700000000001
- type: ndcg_at_3
value: 71.36
- type: ndcg_at_5
value: 73.785
- type: precision_at_1
value: 62.166
- type: precision_at_10
value: 9.716
- type: precision_at_100
value: 1.065
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 28.278
- type: precision_at_5
value: 18.343999999999998
- type: recall_at_1
value: 58.092
- type: recall_at_10
value: 88.73400000000001
- type: recall_at_100
value: 95.195
- type: recall_at_1000
value: 97.04599999999999
- type: recall_at_3
value: 78.45
- type: recall_at_5
value: 84.316
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 16.649
- type: map_at_10
value: 26.457000000000004
- type: map_at_100
value: 28.169
- type: map_at_1000
value: 28.352
- type: map_at_3
value: 23.305
- type: map_at_5
value: 25.169000000000004
- type: mrr_at_1
value: 32.407000000000004
- type: mrr_at_10
value: 40.922
- type: mrr_at_100
value: 41.931000000000004
- type: mrr_at_1000
value: 41.983
- type: mrr_at_3
value: 38.786
- type: mrr_at_5
value: 40.205999999999996
- type: ndcg_at_1
value: 32.407000000000004
- type: ndcg_at_10
value: 33.314
- type: ndcg_at_100
value: 40.312
- type: ndcg_at_1000
value: 43.685
- type: ndcg_at_3
value: 30.391000000000002
- type: ndcg_at_5
value: 31.525
- type: precision_at_1
value: 32.407000000000004
- type: precision_at_10
value: 8.966000000000001
- type: precision_at_100
value: 1.6019999999999999
- type: precision_at_1000
value: 0.22200000000000003
- type: precision_at_3
value: 20.165
- type: precision_at_5
value: 14.722
- type: recall_at_1
value: 16.649
- type: recall_at_10
value: 39.117000000000004
- type: recall_at_100
value: 65.726
- type: recall_at_1000
value: 85.784
- type: recall_at_3
value: 27.914
- type: recall_at_5
value: 33.289
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 36.253
- type: map_at_10
value: 56.16799999999999
- type: map_at_100
value: 57.06099999999999
- type: map_at_1000
value: 57.126
- type: map_at_3
value: 52.644999999999996
- type: map_at_5
value: 54.909
- type: mrr_at_1
value: 72.505
- type: mrr_at_10
value: 79.66
- type: mrr_at_100
value: 79.869
- type: mrr_at_1000
value: 79.88
- type: mrr_at_3
value: 78.411
- type: mrr_at_5
value: 79.19800000000001
- type: ndcg_at_1
value: 72.505
- type: ndcg_at_10
value: 65.094
- type: ndcg_at_100
value: 68.219
- type: ndcg_at_1000
value: 69.515
- type: ndcg_at_3
value: 59.99
- type: ndcg_at_5
value: 62.909000000000006
- type: precision_at_1
value: 72.505
- type: precision_at_10
value: 13.749
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 38.357
- type: precision_at_5
value: 25.313000000000002
- type: recall_at_1
value: 36.253
- type: recall_at_10
value: 68.744
- type: recall_at_100
value: 80.925
- type: recall_at_1000
value: 89.534
- type: recall_at_3
value: 57.535000000000004
- type: recall_at_5
value: 63.282000000000004
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 80.82239999999999
- type: ap
value: 75.65895781725314
- type: f1
value: 80.75880969095746
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.624
- type: map_at_10
value: 34.075
- type: map_at_100
value: 35.229
- type: map_at_1000
value: 35.276999999999994
- type: map_at_3
value: 30.245
- type: map_at_5
value: 32.42
- type: mrr_at_1
value: 22.264
- type: mrr_at_10
value: 34.638000000000005
- type: mrr_at_100
value: 35.744
- type: mrr_at_1000
value: 35.787
- type: mrr_at_3
value: 30.891000000000002
- type: mrr_at_5
value: 33.042
- type: ndcg_at_1
value: 22.264
- type: ndcg_at_10
value: 40.991
- type: ndcg_at_100
value: 46.563
- type: ndcg_at_1000
value: 47.743
- type: ndcg_at_3
value: 33.198
- type: ndcg_at_5
value: 37.069
- type: precision_at_1
value: 22.264
- type: precision_at_10
value: 6.5089999999999995
- type: precision_at_100
value: 0.9299999999999999
- type: precision_at_1000
value: 0.10300000000000001
- type: precision_at_3
value: 14.216999999999999
- type: precision_at_5
value: 10.487
- type: recall_at_1
value: 21.624
- type: recall_at_10
value: 62.303
- type: recall_at_100
value: 88.124
- type: recall_at_1000
value: 97.08
- type: recall_at_3
value: 41.099999999999994
- type: recall_at_5
value: 50.381
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.06703146374831
- type: f1
value: 90.86867815863172
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.46970977740209
- type: f1
value: 86.36832872036588
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.26951300867245
- type: f1
value: 88.93561193959502
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 84.22799874725963
- type: f1
value: 84.30490069236556
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.02007888131948
- type: f1
value: 85.39376041027991
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 85.34900542495481
- type: f1
value: 85.39859673336713
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.078431372549
- type: f1
value: 53.45071102002276
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 65.85798816568047
- type: f1
value: 46.53112748993529
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.96864576384256
- type: f1
value: 45.966703022829506
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 61.31537738803633
- type: f1
value: 45.52601712835461
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 66.29616349946218
- type: f1
value: 47.24166485726613
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.51537070524412
- type: f1
value: 49.463476319014276
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.06792199058508
- type: f1
value: 54.094921857502285
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.960322797579025
- type: f1
value: 48.547371223370945
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.425016812373904
- type: f1
value: 50.47069202054312
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.798251513113655
- type: f1
value: 57.05013069086648
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.37794216543376
- type: f1
value: 56.3607992649805
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 46.56018829858777
- type: f1
value: 43.87319715715134
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.9724277067922
- type: f1
value: 59.36480066245562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.72696704774715
- type: f1
value: 59.143595966615855
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.5971755211836
- type: f1
value: 59.169445724946726
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.29589778076665
- type: f1
value: 67.7577001808977
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.31136516476126
- type: f1
value: 64.52032955983242
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.54472091459314
- type: f1
value: 61.47903120066317
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.45595158036314
- type: f1
value: 58.0891846024637
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.47074646940149
- type: f1
value: 62.84830858877575
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.046402151983855
- type: f1
value: 55.269074430533195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.06523201075991
- type: f1
value: 61.35339643021369
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.954942837928726
- type: f1
value: 57.07035922704846
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.404169468728995
- type: f1
value: 53.94259011839138
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.16610625420309
- type: f1
value: 61.337103431499365
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 52.262945527908535
- type: f1
value: 49.7610691598921
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.54472091459314
- type: f1
value: 63.469099018440154
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.22797579018157
- type: f1
value: 64.89098471083001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.847343644922674
- type: f1
value: 47.8536963168393
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.45326160053799
- type: f1
value: 46.370078045805556
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 42.83120376597175
- type: f1
value: 39.68948521599982
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.5084061869536
- type: f1
value: 53.961876160401545
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.7895090786819
- type: f1
value: 61.134223684676
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98991257565569
- type: f1
value: 52.579862862826296
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.90316072629456
- type: f1
value: 58.203024538290336
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.09818426361802
- type: f1
value: 54.22718458445455
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.991257565568255
- type: f1
value: 55.84892781767421
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.901143241425686
- type: f1
value: 52.25264332199797
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.96368527236047
- type: f1
value: 58.927243876153454
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.64223268325489
- type: f1
value: 62.340453718379706
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.52589105581708
- type: f1
value: 61.661113187022174
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.84599865501009
- type: f1
value: 64.59342572873005
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.81035642232684
- type: f1
value: 57.5169089806797
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.75991930060525
- type: f1
value: 62.89531115787938
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.51647612642906
- type: f1
value: 54.33154780100043
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.985877605917956
- type: f1
value: 54.46187524463802
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.03026227303296
- type: f1
value: 62.34377392877748
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.567585743106925
- type: f1
value: 50.73770655983206
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.2595830531271
- type: f1
value: 53.657327291708626
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.82784129119032
- type: f1
value: 54.82518072665301
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.06859448554137
- type: f1
value: 63.00185280500495
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.91055817081371
- type: f1
value: 55.54116301224262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.54404841963686
- type: f1
value: 59.57650946030184
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.27706792199059
- type: f1
value: 56.50010066083435
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.0719569603228
- type: f1
value: 61.817075925647956
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.23806321452591
- type: f1
value: 65.24917026029749
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.53530598520511
- type: f1
value: 61.71131132295768
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.04303967720243
- type: f1
value: 60.3950085685985
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.83591123066578
- type: f1
value: 54.95059828830849
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.62340282447881
- type: f1
value: 59.525159996498225
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.85406859448555
- type: f1
value: 59.129299095681276
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.76731674512441
- type: f1
value: 61.159560612627715
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.181573638197705
- type: f1
value: 46.98422176289957
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.92737054472092
- type: f1
value: 67.69135611952979
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18964357767318
- type: f1
value: 68.46106138186214
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.0712844653665
- type: f1
value: 66.75545422473901
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4754539340955
- type: f1
value: 74.38427146553252
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.82515131136518
- type: f1
value: 69.63516462173847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.70880968392737
- type: f1
value: 67.45420662567926
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.95494283792871
- type: f1
value: 65.06191009049222
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.75924680564896
- type: f1
value: 68.30833379585945
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.806321452589096
- type: f1
value: 63.273048243765054
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.68997982515133
- type: f1
value: 66.54703855381324
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.46940147948891
- type: f1
value: 65.91017343463396
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.49899125756556
- type: f1
value: 57.90333469917769
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.9219905850706
- type: f1
value: 67.23169403762938
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.486213853396094
- type: f1
value: 54.85282355583758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.04169468728985
- type: f1
value: 68.83833333320462
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.88702084734365
- type: f1
value: 74.04474735232299
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.63416274377943
- type: f1
value: 55.11332211687954
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.23604572965702
- type: f1
value: 50.86529813991055
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 46.62407531943511
- type: f1
value: 43.63485467164535
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.15601882985878
- type: f1
value: 57.522837510959924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.84532616005382
- type: f1
value: 69.60021127179697
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 56.65770006724949
- type: f1
value: 55.84219135523227
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.53665097511768
- type: f1
value: 65.09087787792639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.31405514458642
- type: f1
value: 58.06135303831491
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.88231338264964
- type: f1
value: 62.751099407787926
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.86012104909213
- type: f1
value: 56.29118323058282
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.37390719569602
- type: f1
value: 66.27922244885102
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.8675184936113
- type: f1
value: 70.22146529932019
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.2212508406187
- type: f1
value: 67.77454802056282
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.18090114324143
- type: f1
value: 68.03737625431621
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 63.792945486912856
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.48217888365838
- type: f1
value: 69.96028997292197
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.17821116341627
- type: f1
value: 59.3935969827171
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.86146603900471
- type: f1
value: 60.133692735032376
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.89441829186282
- type: f1
value: 70.03064076194089
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.15063887020847
- type: f1
value: 56.23326278499678
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.43846671149966
- type: f1
value: 57.70440450281974
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.8507061197041
- type: f1
value: 59.22916396061171
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.65568258238063
- type: f1
value: 69.90736239440633
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.8843308675185
- type: f1
value: 59.30332663713599
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.05312710154674
- type: f1
value: 67.44024062594775
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.111634162743776
- type: f1
value: 60.89083013084519
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44115669132482
- type: f1
value: 67.92227541674552
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4687289845326
- type: f1
value: 74.16376793486025
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.31876260928043
- type: f1
value: 68.5246745215607
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 30.90431696479766
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 27.259158476693774
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.28445330838555
- type: mrr
value: 31.15758529581164
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.353
- type: map_at_10
value: 11.565
- type: map_at_100
value: 14.097000000000001
- type: map_at_1000
value: 15.354999999999999
- type: map_at_3
value: 8.749
- type: map_at_5
value: 9.974
- type: mrr_at_1
value: 42.105
- type: mrr_at_10
value: 50.589
- type: mrr_at_100
value: 51.187000000000005
- type: mrr_at_1000
value: 51.233
- type: mrr_at_3
value: 48.246
- type: mrr_at_5
value: 49.546
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 31.009999999999998
- type: ndcg_at_100
value: 28.026
- type: ndcg_at_1000
value: 36.905
- type: ndcg_at_3
value: 35.983
- type: ndcg_at_5
value: 33.764
- type: precision_at_1
value: 42.105
- type: precision_at_10
value: 22.786
- type: precision_at_100
value: 6.916
- type: precision_at_1000
value: 1.981
- type: precision_at_3
value: 33.333
- type: precision_at_5
value: 28.731
- type: recall_at_1
value: 5.353
- type: recall_at_10
value: 15.039
- type: recall_at_100
value: 27.348
- type: recall_at_1000
value: 59.453
- type: recall_at_3
value: 9.792
- type: recall_at_5
value: 11.882
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 33.852
- type: map_at_10
value: 48.924
- type: map_at_100
value: 49.854
- type: map_at_1000
value: 49.886
- type: map_at_3
value: 44.9
- type: map_at_5
value: 47.387
- type: mrr_at_1
value: 38.035999999999994
- type: mrr_at_10
value: 51.644
- type: mrr_at_100
value: 52.339
- type: mrr_at_1000
value: 52.35999999999999
- type: mrr_at_3
value: 48.421
- type: mrr_at_5
value: 50.468999999999994
- type: ndcg_at_1
value: 38.007000000000005
- type: ndcg_at_10
value: 56.293000000000006
- type: ndcg_at_100
value: 60.167
- type: ndcg_at_1000
value: 60.916000000000004
- type: ndcg_at_3
value: 48.903999999999996
- type: ndcg_at_5
value: 52.978
- type: precision_at_1
value: 38.007000000000005
- type: precision_at_10
value: 9.041
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11900000000000001
- type: precision_at_3
value: 22.084
- type: precision_at_5
value: 15.608
- type: recall_at_1
value: 33.852
- type: recall_at_10
value: 75.893
- type: recall_at_100
value: 92.589
- type: recall_at_1000
value: 98.153
- type: recall_at_3
value: 56.969
- type: recall_at_5
value: 66.283
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.174
- type: map_at_10
value: 82.891
- type: map_at_100
value: 83.545
- type: map_at_1000
value: 83.56700000000001
- type: map_at_3
value: 79.944
- type: map_at_5
value: 81.812
- type: mrr_at_1
value: 79.67999999999999
- type: mrr_at_10
value: 86.279
- type: mrr_at_100
value: 86.39
- type: mrr_at_1000
value: 86.392
- type: mrr_at_3
value: 85.21
- type: mrr_at_5
value: 85.92999999999999
- type: ndcg_at_1
value: 79.69000000000001
- type: ndcg_at_10
value: 86.929
- type: ndcg_at_100
value: 88.266
- type: ndcg_at_1000
value: 88.428
- type: ndcg_at_3
value: 83.899
- type: ndcg_at_5
value: 85.56700000000001
- type: precision_at_1
value: 79.69000000000001
- type: precision_at_10
value: 13.161000000000001
- type: precision_at_100
value: 1.513
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.603
- type: precision_at_5
value: 24.138
- type: recall_at_1
value: 69.174
- type: recall_at_10
value: 94.529
- type: recall_at_100
value: 99.15
- type: recall_at_1000
value: 99.925
- type: recall_at_3
value: 85.86200000000001
- type: recall_at_5
value: 90.501
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 39.13064340585255
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 58.97884249325877
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.4680000000000004
- type: map_at_10
value: 7.865
- type: map_at_100
value: 9.332
- type: map_at_1000
value: 9.587
- type: map_at_3
value: 5.800000000000001
- type: map_at_5
value: 6.8790000000000004
- type: mrr_at_1
value: 17.0
- type: mrr_at_10
value: 25.629
- type: mrr_at_100
value: 26.806
- type: mrr_at_1000
value: 26.889000000000003
- type: mrr_at_3
value: 22.8
- type: mrr_at_5
value: 24.26
- type: ndcg_at_1
value: 17.0
- type: ndcg_at_10
value: 13.895
- type: ndcg_at_100
value: 20.491999999999997
- type: ndcg_at_1000
value: 25.759999999999998
- type: ndcg_at_3
value: 13.347999999999999
- type: ndcg_at_5
value: 11.61
- type: precision_at_1
value: 17.0
- type: precision_at_10
value: 7.090000000000001
- type: precision_at_100
value: 1.669
- type: precision_at_1000
value: 0.294
- type: precision_at_3
value: 12.3
- type: precision_at_5
value: 10.02
- type: recall_at_1
value: 3.4680000000000004
- type: recall_at_10
value: 14.363000000000001
- type: recall_at_100
value: 33.875
- type: recall_at_1000
value: 59.711999999999996
- type: recall_at_3
value: 7.483
- type: recall_at_5
value: 10.173
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.04084311714061
- type: cos_sim_spearman
value: 77.51342467443078
- type: euclidean_pearson
value: 80.0321166028479
- type: euclidean_spearman
value: 77.29249114733226
- type: manhattan_pearson
value: 80.03105964262431
- type: manhattan_spearman
value: 77.22373689514794
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.1680158034387
- type: cos_sim_spearman
value: 76.55983344071117
- type: euclidean_pearson
value: 79.75266678300143
- type: euclidean_spearman
value: 75.34516823467025
- type: manhattan_pearson
value: 79.75959151517357
- type: manhattan_spearman
value: 75.42330344141912
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 76.48898993209346
- type: cos_sim_spearman
value: 76.96954120323366
- type: euclidean_pearson
value: 76.94139109279668
- type: euclidean_spearman
value: 76.85860283201711
- type: manhattan_pearson
value: 76.6944095091912
- type: manhattan_spearman
value: 76.61096912972553
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 77.85082366246944
- type: cos_sim_spearman
value: 75.52053350101731
- type: euclidean_pearson
value: 77.1165845070926
- type: euclidean_spearman
value: 75.31216065884388
- type: manhattan_pearson
value: 77.06193941833494
- type: manhattan_spearman
value: 75.31003701700112
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.36305246526497
- type: cos_sim_spearman
value: 87.11704613927415
- type: euclidean_pearson
value: 86.04199125810939
- type: euclidean_spearman
value: 86.51117572414263
- type: manhattan_pearson
value: 86.0805106816633
- type: manhattan_spearman
value: 86.52798366512229
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.18536255599724
- type: cos_sim_spearman
value: 83.63377151025418
- type: euclidean_pearson
value: 83.24657467993141
- type: euclidean_spearman
value: 84.02751481993825
- type: manhattan_pearson
value: 83.11941806582371
- type: manhattan_spearman
value: 83.84251281019304
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 78.95816528475514
- type: cos_sim_spearman
value: 78.86607380120462
- type: euclidean_pearson
value: 78.51268699230545
- type: euclidean_spearman
value: 79.11649316502229
- type: manhattan_pearson
value: 78.32367302808157
- type: manhattan_spearman
value: 78.90277699624637
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.89126914997624
- type: cos_sim_spearman
value: 73.0296921832678
- type: euclidean_pearson
value: 71.50385903677738
- type: euclidean_spearman
value: 73.13368899716289
- type: manhattan_pearson
value: 71.47421463379519
- type: manhattan_spearman
value: 73.03383242946575
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 59.22923684492637
- type: cos_sim_spearman
value: 57.41013211368396
- type: euclidean_pearson
value: 61.21107388080905
- type: euclidean_spearman
value: 60.07620768697254
- type: manhattan_pearson
value: 59.60157142786555
- type: manhattan_spearman
value: 59.14069604103739
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.24345978774299
- type: cos_sim_spearman
value: 77.24225743830719
- type: euclidean_pearson
value: 76.66226095469165
- type: euclidean_spearman
value: 77.60708820493146
- type: manhattan_pearson
value: 76.05303324760429
- type: manhattan_spearman
value: 76.96353149912348
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.50879160160852
- type: cos_sim_spearman
value: 86.43594662965224
- type: euclidean_pearson
value: 86.06846012826577
- type: euclidean_spearman
value: 86.02041395794136
- type: manhattan_pearson
value: 86.10916255616904
- type: manhattan_spearman
value: 86.07346068198953
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 58.39803698977196
- type: cos_sim_spearman
value: 55.96910950423142
- type: euclidean_pearson
value: 58.17941175613059
- type: euclidean_spearman
value: 55.03019330522745
- type: manhattan_pearson
value: 57.333358138183286
- type: manhattan_spearman
value: 54.04614023149965
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 70.98304089637197
- type: cos_sim_spearman
value: 72.44071656215888
- type: euclidean_pearson
value: 72.19224359033983
- type: euclidean_spearman
value: 73.89871188913025
- type: manhattan_pearson
value: 71.21098311547406
- type: manhattan_spearman
value: 72.93405764824821
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.99792397466308
- type: cos_sim_spearman
value: 84.83824377879495
- type: euclidean_pearson
value: 85.70043288694438
- type: euclidean_spearman
value: 84.70627558703686
- type: manhattan_pearson
value: 85.89570850150801
- type: manhattan_spearman
value: 84.95806105313007
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.21850322994712
- type: cos_sim_spearman
value: 72.28669398117248
- type: euclidean_pearson
value: 73.40082510412948
- type: euclidean_spearman
value: 73.0326539281865
- type: manhattan_pearson
value: 71.8659633964841
- type: manhattan_spearman
value: 71.57817425823303
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.80921368595645
- type: cos_sim_spearman
value: 77.33209091229315
- type: euclidean_pearson
value: 76.53159540154829
- type: euclidean_spearman
value: 78.17960842810093
- type: manhattan_pearson
value: 76.13530186637601
- type: manhattan_spearman
value: 78.00701437666875
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.74980608267349
- type: cos_sim_spearman
value: 75.37597374318821
- type: euclidean_pearson
value: 74.90506081911661
- type: euclidean_spearman
value: 75.30151613124521
- type: manhattan_pearson
value: 74.62642745918002
- type: manhattan_spearman
value: 75.18619716592303
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.632662289205584
- type: cos_sim_spearman
value: 60.938543391610914
- type: euclidean_pearson
value: 62.113200529767056
- type: euclidean_spearman
value: 61.410312633261164
- type: manhattan_pearson
value: 61.75494698945686
- type: manhattan_spearman
value: 60.92726195322362
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.283470551557244
- type: cos_sim_spearman
value: 53.44833015864201
- type: euclidean_pearson
value: 41.17892011120893
- type: euclidean_spearman
value: 53.81441383126767
- type: manhattan_pearson
value: 41.17482200420659
- type: manhattan_spearman
value: 53.82180269276363
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.5069165306236
- type: cos_sim_spearman
value: 66.87803259033826
- type: euclidean_pearson
value: 63.5428979418236
- type: euclidean_spearman
value: 66.9293576586897
- type: manhattan_pearson
value: 63.59789526178922
- type: manhattan_spearman
value: 66.86555009875066
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.23026196280264
- type: cos_sim_spearman
value: 35.79397812652861
- type: euclidean_pearson
value: 17.828102102767353
- type: euclidean_spearman
value: 35.721501145568894
- type: manhattan_pearson
value: 17.77134274219677
- type: manhattan_spearman
value: 35.98107902846267
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 56.51946541393812
- type: cos_sim_spearman
value: 63.714686006214485
- type: euclidean_pearson
value: 58.32104651305898
- type: euclidean_spearman
value: 62.237110895702216
- type: manhattan_pearson
value: 58.579416468759185
- type: manhattan_spearman
value: 62.459738981727
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.76009839569795
- type: cos_sim_spearman
value: 56.65188431953149
- type: euclidean_pearson
value: 50.997682160915595
- type: euclidean_spearman
value: 55.99910008818135
- type: manhattan_pearson
value: 50.76220659606342
- type: manhattan_spearman
value: 55.517347595391456
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.232731157702425
- type: cos_sim_spearman
value: 59.89531877658345
- type: euclidean_pearson
value: 49.937914570348376
- type: euclidean_spearman
value: 60.220905659334036
- type: manhattan_pearson
value: 50.00987996844193
- type: manhattan_spearman
value: 60.081341480977926
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.717524559088005
- type: cos_sim_spearman
value: 66.83570886252286
- type: euclidean_pearson
value: 58.41338625505467
- type: euclidean_spearman
value: 66.68991427704938
- type: manhattan_pearson
value: 58.78638572916807
- type: manhattan_spearman
value: 66.58684161046335
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.2962042954962
- type: cos_sim_spearman
value: 76.58255504852025
- type: euclidean_pearson
value: 75.70983192778257
- type: euclidean_spearman
value: 77.4547684870542
- type: manhattan_pearson
value: 75.75565853870485
- type: manhattan_spearman
value: 76.90208974949428
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.47396266924846
- type: cos_sim_spearman
value: 56.492267162048606
- type: euclidean_pearson
value: 55.998505203070195
- type: euclidean_spearman
value: 56.46447012960222
- type: manhattan_pearson
value: 54.873172394430995
- type: manhattan_spearman
value: 56.58111534551218
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 69.87177267688686
- type: cos_sim_spearman
value: 74.57160943395763
- type: euclidean_pearson
value: 70.88330406826788
- type: euclidean_spearman
value: 74.29767636038422
- type: manhattan_pearson
value: 71.38245248369536
- type: manhattan_spearman
value: 74.53102232732175
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.80225656959544
- type: cos_sim_spearman
value: 76.52646173725735
- type: euclidean_pearson
value: 73.95710720200799
- type: euclidean_spearman
value: 76.54040031984111
- type: manhattan_pearson
value: 73.89679971946774
- type: manhattan_spearman
value: 76.60886958161574
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.70844249898789
- type: cos_sim_spearman
value: 72.68571783670241
- type: euclidean_pearson
value: 72.38800772441031
- type: euclidean_spearman
value: 72.86804422703312
- type: manhattan_pearson
value: 71.29840508203515
- type: manhattan_spearman
value: 71.86264441749513
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.647478923935694
- type: cos_sim_spearman
value: 63.74453623540931
- type: euclidean_pearson
value: 59.60138032437505
- type: euclidean_spearman
value: 63.947930832166065
- type: manhattan_pearson
value: 58.59735509491861
- type: manhattan_spearman
value: 62.082503844627404
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 65.8722516867162
- type: cos_sim_spearman
value: 71.81208592523012
- type: euclidean_pearson
value: 67.95315252165956
- type: euclidean_spearman
value: 73.00749822046009
- type: manhattan_pearson
value: 68.07884688638924
- type: manhattan_spearman
value: 72.34210325803069
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.5405814240949
- type: cos_sim_spearman
value: 60.56838649023775
- type: euclidean_pearson
value: 53.011731611314104
- type: euclidean_spearman
value: 58.533194841668426
- type: manhattan_pearson
value: 53.623067729338494
- type: manhattan_spearman
value: 58.018756154446926
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 13.611046866216112
- type: cos_sim_spearman
value: 28.238192909158492
- type: euclidean_pearson
value: 22.16189199885129
- type: euclidean_spearman
value: 35.012895679076564
- type: manhattan_pearson
value: 21.969771178698387
- type: manhattan_spearman
value: 32.456985088607475
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 74.58077407011655
- type: cos_sim_spearman
value: 84.51542547285167
- type: euclidean_pearson
value: 74.64613843596234
- type: euclidean_spearman
value: 84.51542547285167
- type: manhattan_pearson
value: 75.15335973101396
- type: manhattan_spearman
value: 84.51542547285167
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 82.0739825531578
- type: cos_sim_spearman
value: 84.01057479311115
- type: euclidean_pearson
value: 83.85453227433344
- type: euclidean_spearman
value: 84.01630226898655
- type: manhattan_pearson
value: 83.75323603028978
- type: manhattan_spearman
value: 83.89677983727685
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 78.12945623123957
- type: mrr
value: 93.87738713719106
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.983000000000004
- type: map_at_10
value: 62.946000000000005
- type: map_at_100
value: 63.514
- type: map_at_1000
value: 63.554
- type: map_at_3
value: 60.183
- type: map_at_5
value: 61.672000000000004
- type: mrr_at_1
value: 55.667
- type: mrr_at_10
value: 64.522
- type: mrr_at_100
value: 64.957
- type: mrr_at_1000
value: 64.995
- type: mrr_at_3
value: 62.388999999999996
- type: mrr_at_5
value: 63.639
- type: ndcg_at_1
value: 55.667
- type: ndcg_at_10
value: 67.704
- type: ndcg_at_100
value: 70.299
- type: ndcg_at_1000
value: 71.241
- type: ndcg_at_3
value: 62.866
- type: ndcg_at_5
value: 65.16999999999999
- type: precision_at_1
value: 55.667
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.053
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 24.444
- type: precision_at_5
value: 16.133
- type: recall_at_1
value: 52.983000000000004
- type: recall_at_10
value: 80.656
- type: recall_at_100
value: 92.5
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 67.744
- type: recall_at_5
value: 73.433
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.72772277227723
- type: cos_sim_ap
value: 92.17845897992215
- type: cos_sim_f1
value: 85.9746835443038
- type: cos_sim_precision
value: 87.07692307692308
- type: cos_sim_recall
value: 84.89999999999999
- type: dot_accuracy
value: 99.3039603960396
- type: dot_ap
value: 60.70244020124878
- type: dot_f1
value: 59.92742353551063
- type: dot_precision
value: 62.21743810548978
- type: dot_recall
value: 57.8
- type: euclidean_accuracy
value: 99.71683168316832
- type: euclidean_ap
value: 91.53997039964659
- type: euclidean_f1
value: 84.88372093023257
- type: euclidean_precision
value: 90.02242152466367
- type: euclidean_recall
value: 80.30000000000001
- type: manhattan_accuracy
value: 99.72376237623763
- type: manhattan_ap
value: 91.80756777790289
- type: manhattan_f1
value: 85.48468106479157
- type: manhattan_precision
value: 85.8728557013118
- type: manhattan_recall
value: 85.1
- type: max_accuracy
value: 99.72772277227723
- type: max_ap
value: 92.17845897992215
- type: max_f1
value: 85.9746835443038
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 53.52464042600003
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.071631948736
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.19552407604654
- type: mrr
value: 49.95269130379425
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.345293033095427
- type: cos_sim_spearman
value: 29.976931423258403
- type: dot_pearson
value: 27.047078008958408
- type: dot_spearman
value: 27.75894368380218
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 1.706
- type: map_at_100
value: 9.634
- type: map_at_1000
value: 23.665
- type: map_at_3
value: 0.5950000000000001
- type: map_at_5
value: 0.95
- type: mrr_at_1
value: 86.0
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91.0
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 80.0
- type: ndcg_at_10
value: 72.573
- type: ndcg_at_100
value: 53.954
- type: ndcg_at_1000
value: 47.760999999999996
- type: ndcg_at_3
value: 76.173
- type: ndcg_at_5
value: 75.264
- type: precision_at_1
value: 86.0
- type: precision_at_10
value: 76.4
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.802
- type: precision_at_3
value: 81.333
- type: precision_at_5
value: 80.4
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 1.925
- type: recall_at_100
value: 12.762
- type: recall_at_1000
value: 44.946000000000005
- type: recall_at_3
value: 0.634
- type: recall_at_5
value: 1.051
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.0
- type: f1
value: 88.55666666666666
- type: precision
value: 87.46166666666667
- type: recall
value: 91.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.22543352601156
- type: f1
value: 51.03220478943021
- type: precision
value: 48.8150289017341
- type: recall
value: 57.22543352601156
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.58536585365854
- type: f1
value: 39.66870798578116
- type: precision
value: 37.416085946573745
- type: recall
value: 46.58536585365854
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.7
- type: f1
value: 86.77999999999999
- type: precision
value: 85.45333333333332
- type: recall
value: 89.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.58333333333331
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.3
- type: precision
value: 89.31666666666668
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.9
- type: f1
value: 83.67190476190476
- type: precision
value: 82.23333333333332
- type: recall
value: 86.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.0
- type: f1
value: 42.23229092632078
- type: precision
value: 39.851634683724235
- type: recall
value: 50.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.3
- type: f1
value: 70.86190476190477
- type: precision
value: 68.68777777777777
- type: recall
value: 76.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.073170731707314
- type: f1
value: 50.658958927251604
- type: precision
value: 48.26480836236933
- type: recall
value: 57.073170731707314
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.2
- type: f1
value: 62.156507936507936
- type: precision
value: 59.84964285714286
- type: recall
value: 68.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.52126366950182
- type: f1
value: 72.8496210148701
- type: precision
value: 70.92171498003819
- type: recall
value: 77.52126366950182
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.78260869565217
- type: f1
value: 65.32422360248447
- type: precision
value: 63.063067367415194
- type: recall
value: 70.78260869565217
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.43478260869566
- type: f1
value: 73.02608695652172
- type: precision
value: 70.63768115942028
- type: recall
value: 78.43478260869566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.9
- type: f1
value: 55.309753694581275
- type: precision
value: 53.130476190476195
- type: recall
value: 60.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.89999999999999
- type: f1
value: 67.92023809523809
- type: precision
value: 65.82595238095237
- type: recall
value: 72.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.80337756332931
- type: f1
value: 39.42174900558496
- type: precision
value: 36.97101116280851
- type: recall
value: 46.80337756332931
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.8
- type: f1
value: 86.79
- type: precision
value: 85.375
- type: recall
value: 89.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.199999999999996
- type: f1
value: 39.95484348984349
- type: precision
value: 37.561071428571424
- type: recall
value: 47.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.8
- type: f1
value: 84.68190476190475
- type: precision
value: 83.275
- type: recall
value: 87.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.76190476190476
- type: f1
value: 42.14965986394558
- type: precision
value: 39.96743626743626
- type: recall
value: 48.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.10000000000001
- type: f1
value: 59.58580086580086
- type: precision
value: 57.150238095238095
- type: recall
value: 66.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.3
- type: f1
value: 84.0
- type: precision
value: 82.48666666666666
- type: recall
value: 87.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 87.79523809523809
- type: precision
value: 86.6
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.0
- type: f1
value: 83.81
- type: precision
value: 82.36666666666666
- type: recall
value: 87.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.9
- type: f1
value: 57.76533189033189
- type: precision
value: 55.50595238095239
- type: recall
value: 63.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.1
- type: f1
value: 71.83690476190478
- type: precision
value: 70.04928571428573
- type: recall
value: 76.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.3
- type: f1
value: 59.32626984126984
- type: precision
value: 56.62535714285713
- type: recall
value: 66.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.60000000000001
- type: f1
value: 87.96333333333334
- type: precision
value: 86.73333333333333
- type: recall
value: 90.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.10000000000001
- type: precision
value: 90.16666666666666
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.71428571428571
- type: f1
value: 82.29142600436403
- type: precision
value: 80.8076626877166
- type: recall
value: 85.71428571428571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.88888888888889
- type: f1
value: 85.7834757834758
- type: precision
value: 84.43732193732193
- type: recall
value: 88.88888888888889
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 85.67190476190476
- type: precision
value: 84.43333333333332
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.72727272727273
- type: f1
value: 78.21969696969695
- type: precision
value: 76.18181818181819
- type: recall
value: 82.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 61.0062893081761
- type: f1
value: 55.13976240391334
- type: precision
value: 52.92112499659669
- type: recall
value: 61.0062893081761
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.86666666666666
- type: precision
value: 85.69166666666668
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.54085603112841
- type: f1
value: 68.56031128404669
- type: precision
value: 66.53047989623866
- type: recall
value: 73.54085603112841
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 43.58974358974359
- type: f1
value: 36.45299145299145
- type: precision
value: 33.81155881155882
- type: recall
value: 43.58974358974359
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.599999999999994
- type: f1
value: 53.264689754689755
- type: precision
value: 50.869166666666665
- type: recall
value: 59.599999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.2
- type: f1
value: 81.61666666666665
- type: precision
value: 80.02833333333335
- type: recall
value: 85.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.78504672897196
- type: f1
value: 58.00029669188548
- type: precision
value: 55.815809968847354
- type: recall
value: 63.78504672897196
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.5
- type: f1
value: 61.518333333333345
- type: precision
value: 59.622363699102834
- type: recall
value: 66.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 85.60222222222221
- type: precision
value: 84.27916666666665
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.699999999999996
- type: f1
value: 52.732375957375965
- type: precision
value: 50.63214035964035
- type: recall
value: 58.699999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 89.99666666666667
- type: precision
value: 89.03333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.10000000000001
- type: f1
value: 87.55666666666667
- type: precision
value: 86.36166666666668
- type: recall
value: 90.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 88.89000000000001
- type: precision
value: 87.71166666666666
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.7
- type: f1
value: 60.67427750410509
- type: precision
value: 58.71785714285714
- type: recall
value: 65.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 81.93190476190475
- type: precision
value: 80.37833333333333
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.833333333333336
- type: f1
value: 42.006625781625786
- type: precision
value: 40.077380952380956
- type: recall
value: 47.833333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 10.4
- type: f1
value: 8.24465007215007
- type: precision
value: 7.664597069597071
- type: recall
value: 10.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.6
- type: f1
value: 77.76333333333334
- type: precision
value: 75.57833333333332
- type: recall
value: 82.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.67857142857143
- type: f1
value: 44.302721088435376
- type: precision
value: 41.49801587301587
- type: recall
value: 52.67857142857143
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.3205268935236
- type: f1
value: 22.426666605171157
- type: precision
value: 20.685900116470915
- type: recall
value: 28.3205268935236
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 22.7
- type: f1
value: 17.833970473970474
- type: precision
value: 16.407335164835164
- type: recall
value: 22.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.92999999999999
- type: precision
value: 88.87
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.25
- type: precision
value: 88.21666666666667
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.19999999999999
- type: f1
value: 63.38269841269841
- type: precision
value: 61.14773809523809
- type: recall
value: 69.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.8
- type: f1
value: 42.839915639915645
- type: precision
value: 40.770287114845935
- type: recall
value: 48.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.8
- type: f1
value: 85.90666666666668
- type: precision
value: 84.54166666666666
- type: recall
value: 88.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 46.6
- type: f1
value: 40.85892920804686
- type: precision
value: 38.838223114604695
- type: recall
value: 46.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.0
- type: f1
value: 80.14190476190475
- type: precision
value: 78.45333333333333
- type: recall
value: 84.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5
- type: f1
value: 87.78333333333333
- type: precision
value: 86.5
- type: recall
value: 90.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.5
- type: f1
value: 69.48397546897547
- type: precision
value: 67.51869047619049
- type: recall
value: 74.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.846715328467155
- type: f1
value: 27.828177499710343
- type: precision
value: 26.63451511991658
- type: recall
value: 32.846715328467155
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.0
- type: f1
value: 6.07664116764988
- type: precision
value: 5.544177607179943
- type: recall
value: 8.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.38555555555554
- type: precision
value: 82.91583333333334
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 84.08333333333331
- type: precision
value: 82.47333333333333
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.95238095238095
- type: f1
value: 76.13095238095238
- type: precision
value: 74.05753968253967
- type: recall
value: 80.95238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.971422975172975
- type: precision
value: 6.557814916172301
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.099378881987576
- type: f1
value: 37.01649742022413
- type: precision
value: 34.69420618488942
- type: recall
value: 44.099378881987576
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.3
- type: f1
value: 80.32666666666667
- type: precision
value: 78.60666666666665
- type: recall
value: 84.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.5
- type: f1
value: 90.49666666666666
- type: precision
value: 89.56666666666668
- type: recall
value: 92.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 10.0
- type: f1
value: 8.268423529875141
- type: precision
value: 7.878118605532398
- type: recall
value: 10.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.22077922077922
- type: f1
value: 74.27128427128426
- type: precision
value: 72.28715728715729
- type: recall
value: 79.22077922077922
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.64885496183206
- type: f1
value: 58.87495456197747
- type: precision
value: 55.992366412213734
- type: recall
value: 65.64885496183206
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.06986899563319
- type: f1
value: 94.78408539543909
- type: precision
value: 94.15332362930616
- type: recall
value: 96.06986899563319
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.2
- type: f1
value: 71.72571428571428
- type: precision
value: 69.41000000000001
- type: recall
value: 77.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.4406779661017
- type: f1
value: 83.2391713747646
- type: precision
value: 81.74199623352166
- type: recall
value: 86.4406779661017
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.4
- type: f1
value: 6.017828743398003
- type: precision
value: 5.4829865484756795
- type: recall
value: 8.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.5
- type: f1
value: 79.74833333333333
- type: precision
value: 78.04837662337664
- type: recall
value: 83.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.4
- type: f1
value: 54.467301587301584
- type: precision
value: 52.23242424242424
- type: recall
value: 60.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.9
- type: f1
value: 69.68699134199134
- type: precision
value: 67.59873015873016
- type: recall
value: 74.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.0
- type: f1
value: 84.9652380952381
- type: precision
value: 83.66166666666666
- type: recall
value: 88.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.1
- type: f1
value: 7.681244588744588
- type: precision
value: 7.370043290043291
- type: recall
value: 9.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9651474530831
- type: f1
value: 76.84220605132133
- type: precision
value: 75.19606398962966
- type: recall
value: 80.9651474530831
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.9
- type: f1
value: 83.705
- type: precision
value: 82.3120634920635
- type: recall
value: 86.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.64426877470356
- type: f1
value: 23.98763072676116
- type: precision
value: 22.506399397703746
- type: recall
value: 29.64426877470356
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.4225352112676
- type: f1
value: 62.84037558685445
- type: precision
value: 59.56572769953053
- type: recall
value: 70.4225352112676
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 19.64071856287425
- type: f1
value: 15.125271011207756
- type: precision
value: 13.865019261197494
- type: recall
value: 19.64071856287425
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.80666666666666
- type: precision
value: 86.70833333333331
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 18.407224958949097
- type: precision
value: 16.982385430661292
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.98591549295775
- type: f1
value: 49.94718309859154
- type: precision
value: 47.77864154624717
- type: recall
value: 55.98591549295775
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.07692307692307
- type: f1
value: 66.74358974358974
- type: precision
value: 64.06837606837607
- type: recall
value: 73.07692307692307
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.25
- type: precision
value: 92.43333333333332
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.78705636743215
- type: f1
value: 31.63899658680452
- type: precision
value: 29.72264397629742
- type: recall
value: 37.78705636743215
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.6
- type: f1
value: 16.91697302697303
- type: precision
value: 15.71225147075147
- type: recall
value: 21.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.01628664495115
- type: f1
value: 81.38514037536838
- type: precision
value: 79.83170466883823
- type: recall
value: 85.01628664495115
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.39999999999999
- type: f1
value: 79.96380952380952
- type: precision
value: 78.48333333333333
- type: recall
value: 83.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.2
- type: f1
value: 79.26190476190476
- type: precision
value: 77.58833333333334
- type: recall
value: 83.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.59055118110236
- type: f1
value: 71.66854143232096
- type: precision
value: 70.30183727034121
- type: recall
value: 75.59055118110236
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.5
- type: f1
value: 59.26095238095238
- type: precision
value: 56.81909090909092
- type: recall
value: 65.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.26315789473685
- type: f1
value: 47.986523325858506
- type: precision
value: 45.33950006595436
- type: recall
value: 55.26315789473685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.89999999999999
- type: f1
value: 78.835
- type: precision
value: 77.04761904761905
- type: recall
value: 82.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 43.269230769230774
- type: f1
value: 36.20421245421245
- type: precision
value: 33.57371794871795
- type: recall
value: 43.269230769230774
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.0
- type: f1
value: 84.70666666666666
- type: precision
value: 83.23166666666665
- type: recall
value: 88.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.4
- type: f1
value: 72.54666666666667
- type: precision
value: 70.54318181818181
- type: recall
value: 77.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.60000000000001
- type: f1
value: 74.1588888888889
- type: precision
value: 72.30250000000001
- type: recall
value: 78.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.40566037735849
- type: f1
value: 66.82587328813744
- type: precision
value: 64.75039308176099
- type: recall
value: 72.40566037735849
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.8
- type: f1
value: 68.56357142857144
- type: precision
value: 66.3178822055138
- type: recall
value: 73.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.78832116788321
- type: f1
value: 89.3552311435523
- type: precision
value: 88.20559610705597
- type: recall
value: 91.78832116788321
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.05085581085581
- type: precision
value: 66.955
- type: recall
value: 74.3
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.896
- type: map_at_10
value: 8.993
- type: map_at_100
value: 14.133999999999999
- type: map_at_1000
value: 15.668000000000001
- type: map_at_3
value: 5.862
- type: map_at_5
value: 7.17
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 42.931000000000004
- type: mrr_at_100
value: 44.81
- type: mrr_at_1000
value: 44.81
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 41.701
- type: ndcg_at_1
value: 31.633
- type: ndcg_at_10
value: 21.163
- type: ndcg_at_100
value: 33.306000000000004
- type: ndcg_at_1000
value: 45.275999999999996
- type: ndcg_at_3
value: 25.685999999999996
- type: ndcg_at_5
value: 23.732
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 17.755000000000003
- type: precision_at_100
value: 6.938999999999999
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 25.85
- type: precision_at_5
value: 23.265
- type: recall_at_1
value: 2.896
- type: recall_at_10
value: 13.333999999999998
- type: recall_at_100
value: 43.517
- type: recall_at_1000
value: 79.836
- type: recall_at_3
value: 6.306000000000001
- type: recall_at_5
value: 8.825
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.3874
- type: ap
value: 13.829909072469423
- type: f1
value: 53.54534203543492
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.62026032823995
- type: f1
value: 62.85251350485221
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 33.21527881409797
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.97943613280086
- type: cos_sim_ap
value: 70.75454316885921
- type: cos_sim_f1
value: 65.38274012676743
- type: cos_sim_precision
value: 60.761214318078835
- type: cos_sim_recall
value: 70.76517150395777
- type: dot_accuracy
value: 79.0546581629612
- type: dot_ap
value: 47.3197121792147
- type: dot_f1
value: 49.20106524633821
- type: dot_precision
value: 42.45499808502489
- type: dot_recall
value: 58.49604221635884
- type: euclidean_accuracy
value: 85.08076533349228
- type: euclidean_ap
value: 70.95016106374474
- type: euclidean_f1
value: 65.43987900176455
- type: euclidean_precision
value: 62.64478764478765
- type: euclidean_recall
value: 68.49604221635884
- type: manhattan_accuracy
value: 84.93771234428085
- type: manhattan_ap
value: 70.63668388755362
- type: manhattan_f1
value: 65.23895401262398
- type: manhattan_precision
value: 56.946084218811485
- type: manhattan_recall
value: 76.35883905013192
- type: max_accuracy
value: 85.08076533349228
- type: max_ap
value: 70.95016106374474
- type: max_f1
value: 65.43987900176455
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.69096130709822
- type: cos_sim_ap
value: 84.82526278228542
- type: cos_sim_f1
value: 77.65485060585536
- type: cos_sim_precision
value: 75.94582658619167
- type: cos_sim_recall
value: 79.44256236526024
- type: dot_accuracy
value: 80.97954748321496
- type: dot_ap
value: 64.81642914145866
- type: dot_f1
value: 60.631996987229975
- type: dot_precision
value: 54.5897293631712
- type: dot_recall
value: 68.17831844779796
- type: euclidean_accuracy
value: 88.6987231730508
- type: euclidean_ap
value: 84.80003825477253
- type: euclidean_f1
value: 77.67194179854496
- type: euclidean_precision
value: 75.7128235122094
- type: euclidean_recall
value: 79.73514012935017
- type: manhattan_accuracy
value: 88.62692591298949
- type: manhattan_ap
value: 84.80451408255276
- type: manhattan_f1
value: 77.69888949572183
- type: manhattan_precision
value: 73.70311528631622
- type: manhattan_recall
value: 82.15275639051433
- type: max_accuracy
value: 88.6987231730508
- type: max_ap
value: 84.82526278228542
- type: max_f1
value: 77.69888949572183
---
## Multilingual-E5-small
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 384.
## Usage
Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Zero out padded positions, then average over the sequence dimension.
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
               'query: 南瓜的家常做法',
               "passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
               "passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]

tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-small')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-small')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings so the dot product below equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
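The `scores` tensor has one row per query and one column per passage, so ranking passages per query is a one-liner. A small follow-up sketch, reusing the `scores` variable from the block above:
```python
# scores has shape (num_queries, num_passages); higher means more similar.
best = scores.argmax(dim=1)
for i, j in enumerate(best.tolist()):
    print(f"query {i} -> passage {j} (score {scores[i, j].item():.2f})")
```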
## Supported Languages
This model is initialized from [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384)
and continually trained on a mixture of multilingual datasets.
It supports 100 languages from xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
**Initialization**: [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384)
**First stage**: contrastive pre-training with weak supervision
| Dataset | Weak supervision | # of text pairs |
|--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------|
| Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B |
| [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M |
| [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B |
| [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M |
| Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M |
| [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M |
| [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M |
| [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M |
| [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M |
**Second stage**: supervised fine-tuning
| Dataset | Language | # of text pairs |
|----------------------------------------------------------------------------------------|--------------|-----------------|
| [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k |
| [NQ](https://github.com/facebookresearch/DPR) | English | 70k |
| [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k |
| [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k |
| [ELI5](https://huggingface.co/datasets/eli5) | English | 500k |
| [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k |
| [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [SQuAD](https://huggingface.co/datasets/squad) | English | 87k |
| [Quora](https://huggingface.co/datasets/quora) | English | 150k |
| [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k |
| [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k |
For all labeled datasets, we only use their training sets for fine-tuning.
For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
| Model                 | Avg MRR@10 |      | ar   | bn   | en   | fi   | id   | ja   | ko   | ru   | sw   | te   | th   |
|-----------------------|------------|------|------|------|------|------|------|------|------|------|------|------|------|
| BM25                  | 33.3       |      | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 |
| mDPR                  | 16.7       |      | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3  | 10.6 | 13.5 |
| BM25 + mDPR           | 41.7       |      | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 |
| multilingual-e5-small | 64.4       |      | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 |
| multilingual-e5-base  | 65.9       |      | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 |
| multilingual-e5-large | **70.5**   |      | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 |
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB](https://arxiv.org/abs/2210.07316) benchmarks.
## Support for Sentence Transformers
Below is an example of usage with `sentence_transformers`.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/multilingual-e5-small')
input_texts = [
'query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
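To score query–passage pairs from these embeddings, the `sentence_transformers.util.cos_sim` helper can be applied directly. A minimal sketch, assuming the `embeddings` array and the `input_texts` ordering from the block above:
```python
from sentence_transformers import util

# Rows 0-1 are queries, rows 2-3 are passages (matching input_texts above).
scores = util.cos_sim(embeddings[:2], embeddings[2:])
print(scores)
```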
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefixes "query: " and "passage: " to input texts?**
Yes, this is how the model was trained; otherwise you will see a performance degradation.
Here are some rules of thumb (a short sketch follows the list):
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
- Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval.
- Use the "query: " prefix if you want to use embeddings as features, e.g. for linear-probing classification or clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores fall between 0.7 and 1.0?**
This is a known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
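To make this concrete, here is a minimal sketch with made-up scores showing that the narrow score range does not affect the induced ranking:
```python
import numpy as np

# Hypothetical cosine similarities of one query against three passages;
# all values sit in the typical 0.7-1.0 band.
scores = np.array([0.82, 0.91, 0.78])

# Retrieval only depends on the relative order of the scores.
ranking = np.argsort(-scores)  # passage indices, best match first
print(ranking)  # -> [1 0 2]
```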
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens. | [
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
mogaio/pr_ebsa_fr_tran_merged25_e5_middle_offsets | mogaio | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-15T18:24:37 | 2023-12-15T18:25:53 | 49 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: setfit
metrics:
- accuracy_score
- classification_report
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Adil Hussain
Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin
Shah à l''époque où il fréquentait l''École nationale d''art dramatique'
- text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un
avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs
de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré
que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en
accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates
aspirent à renverser six circonscriptions détenues par les républicains que M.
Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés
de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine
Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la
conférence du parti démocrate à la Chambre des représentants,
Suite à la page suivante
a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions
de dollars aux campagnes dans les circonscriptions de New York Des problèmes à
venir pour les démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les
démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge.
Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune
issue ne soit en vue : les retombées potentielles pour leur parti lors des élections
de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils
parlent d''immigration - comme les démocrates le font pour l''avortement - et
sont clairement à l''attaque sur la question des migrants à New York, tandis que
les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication
pour le Centre de politique de l''Université de Virginie, au réseau USA Today
Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud
depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville,
et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au
nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux
frais de la ville Les démocrates doivent y remporter des victoires pour gagner
cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain
président de la Chambre des représentants Les publicités d''attaque des républicains
s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images
télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams
et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute
et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac
Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales
à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique
de la crise des migrants, soulignant que les élections de 2024 n''auront lieu
que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient
se poser'
- text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA
H-1B AUX ETATS-UNIS
Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy,
candidat républicain indien-américain à l''élection présidentielle, a promis de
"vider" le système basé sur la loterie et de le remplacer par un système d''admission
méritocratique s''il remporte les élections présidentielles de 2024'
- text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover
("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris
Owen Donald Glover
("Queer as Folk") a 54 ans. a 54 ans. Acteur
("Je sais ce que vous avez fait l''été dernier") a 50 ans'
- text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche.
"Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi
en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de
ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient
même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne
peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens
les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi
réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart,
ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago,
voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations
de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection
américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé
Howard, qui était le roi de tous les médias, en prince Harry de tous les médias.
Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission
de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire
type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous
avec lui ?"
M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre
de ses sketches à l''antenne, a été un critique virulent de Trump tout au long
de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à
nouveau en 2024.
En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu
l''élection ?"
Il a poursuivi en qualifiant les partisans de Trump de "nigauds".
"Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il
y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré
Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface
de la terre, pourquoi traîniez-vous avec lui ?"
M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas
soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes
qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke"
comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué
ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus
récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans
un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump
a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon
OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern
chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego".
Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela
signifie, ou que je soutiens les personnes qui veulent être transgenres ou que
je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine.
"Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs.
L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course
à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé
sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy
Failla critique Stern.
"L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire
la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne",
a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre
Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral'
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy_score
value: 0.923784494086728
name: Accuracy_Score
- type: classification_report
value:
'0':
precision: 0.9251101321585903
recall: 0.8898305084745762
f1-score: 0.9071274298056154
support: 236
'1':
precision: 0.9081967213114754
recall: 0.920265780730897
f1-score: 0.9141914191419142
support: 301
'2':
precision: 0.9432314410480349
recall: 0.9642857142857143
f1-score: 0.9536423841059601
support: 224
accuracy: 0.923784494086728
macro avg:
precision: 0.9255127648393668
recall: 0.9247940011637291
f1-score: 0.9249870776844965
support: 761
weighted avg:
precision: 0.9237543325873079
recall: 0.923784494086728
f1-score: 0.9236131204146865
support: 761
name: Classification_Report
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> |
| neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> |
| obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy_Score | Classification_Report |
|:--------|:---------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| **all** | 0.9238 | {'0': {'precision': 0.9251101321585903, 'recall': 0.8898305084745762, 'f1-score': 0.9071274298056154, 'support': 236}, '1': {'precision': 0.9081967213114754, 'recall': 0.920265780730897, 'f1-score': 0.9141914191419142, 'support': 301}, '2': {'precision': 0.9432314410480349, 'recall': 0.9642857142857143, 'f1-score': 0.9536423841059601, 'support': 224}, 'accuracy': 0.923784494086728, 'macro avg': {'precision': 0.9255127648393668, 'recall': 0.9247940011637291, 'f1-score': 0.9249870776844965, 'support': 761}, 'weighted avg': {'precision': 0.9237543325873079, 'recall': 0.923784494086728, 'f1-score': 0.9236131204146865, 'support': 761}} |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e5_middle_offsets")
# Run inference
preds = model("""Adil Hussain
Adil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin Shah à l'époque où il fréquentait l'École nationale d'art dramatique""")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 9 | 247.2638 | 2089 |
| Label | Training Sample Count |
|:------|:----------------------|
| neg | 913 |
| obj | 1216 |
| pos | 911 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 1
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
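For reference, these values map directly onto SetFit's `TrainingArguments`. Below is a minimal, hypothetical training sketch using the listed hyperparameters; the tiny inline dataset is illustrative only, since the actual training data is not published with this card:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot training data; the real dataset is not released.
train_dataset = Dataset.from_dict({
    "text": ["Exemple positif ...", "Exemple négatif ...", "Exemple objectif ..."],
    "label": ["pos", "neg", "obj"],
})

model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
    labels=["neg", "obj", "pos"],
)

# Values taken from the hyperparameter list above.
args = TrainingArguments(
    batch_size=(8, 8),
    num_epochs=(5, 5),
    body_learning_rate=(2e-5, 2e-5),
    head_learning_rate=2e-5,
    sampling_strategy="oversampling",
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```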
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.3703 | - |
| 0.0658 | 50 | 0.3145 | - |
| 0.1316 | 100 | 0.1839 | - |
| 0.1974 | 150 | 0.2558 | - |
| 0.2632 | 200 | 0.2683 | - |
| 0.3289 | 250 | 0.1572 | - |
| 0.3947 | 300 | 0.1953 | - |
| 0.4605 | 350 | 0.171 | - |
| 0.5263 | 400 | 0.2326 | - |
| 0.5921 | 450 | 0.1762 | - |
| 0.6579 | 500 | 0.2818 | - |
| 0.7237 | 550 | 0.2733 | - |
| 0.7895 | 600 | 0.195 | - |
| 0.8553 | 650 | 0.2104 | - |
| 0.9211 | 700 | 0.2124 | - |
| 0.9868 | 750 | 0.0818 | - |
| 1.0526 | 800 | 0.1046 | - |
| 1.1184 | 850 | 0.1633 | - |
| 1.1842 | 900 | 0.3207 | - |
| 1.25 | 950 | 0.2703 | - |
| 1.3158 | 1000 | 0.1934 | - |
| 1.3816 | 1050 | 0.2547 | - |
| 1.4474 | 1100 | 0.0933 | - |
| 1.5132 | 1150 | 0.2102 | - |
| 1.5789 | 1200 | 0.0699 | - |
| 1.6447 | 1250 | 0.1778 | - |
| 1.7105 | 1300 | 0.1796 | - |
| 1.7763 | 1350 | 0.0221 | - |
| 1.8421 | 1400 | 0.2154 | - |
| 1.9079 | 1450 | 0.1683 | - |
| 1.9737 | 1500 | 0.3096 | - |
| 2.0395 | 1550 | 0.201 | - |
| 2.1053 | 1600 | 0.1954 | - |
| 2.1711 | 1650 | 0.2301 | - |
| 2.2368 | 1700 | 0.1141 | - |
| 2.3026 | 1750 | 0.1949 | - |
| 2.3684 | 1800 | 0.164 | - |
| 2.4342 | 1850 | 0.2307 | - |
| 2.5 | 1900 | 0.1912 | - |
| 2.5658 | 1950 | 0.2349 | - |
| 2.6316 | 2000 | 0.0922 | - |
| 2.6974 | 2050 | 0.0702 | - |
| 2.7632 | 2100 | 0.1089 | - |
| 2.8289 | 2150 | 0.1711 | - |
| 2.8947 | 2200 | 0.1432 | - |
| 2.9605 | 2250 | 0.2739 | - |
| 3.0263 | 2300 | 0.1889 | - |
| 3.0921 | 2350 | 0.1036 | - |
| 3.1579 | 2400 | 0.1372 | - |
| 3.2237 | 2450 | 0.028 | - |
| 3.2895 | 2500 | 0.1739 | - |
| 3.3553 | 2550 | 0.142 | - |
| 3.4211 | 2600 | 0.0838 | - |
| 3.4868 | 2650 | 0.0657 | - |
| 3.5526 | 2700 | 0.0054 | - |
| 3.6184 | 2750 | 0.0426 | - |
| 3.6842 | 2800 | 0.1974 | - |
| 3.75 | 2850 | 0.0279 | - |
| 3.8158 | 2900 | 0.1326 | - |
| 3.8816 | 2950 | 0.1614 | - |
| 3.9474 | 3000 | 0.1251 | - |
| 4.0132 | 3050 | 0.1174 | - |
| 4.0789 | 3100 | 0.1948 | - |
| 4.1447 | 3150 | 0.0555 | - |
| 4.2105 | 3200 | 0.0064 | - |
| 4.2763 | 3250 | 0.064 | - |
| 4.3421 | 3300 | 0.0013 | - |
| 4.4079 | 3350 | 0.135 | - |
| 4.4737 | 3400 | 0.0574 | - |
| 4.5395 | 3450 | 0.174 | - |
| 4.6053 | 3500 | 0.2199 | - |
| 4.6711 | 3550 | 0.387 | - |
| 4.7368 | 3600 | 0.114 | - |
| 4.8026 | 3650 | 0.0853 | - |
| 4.8684 | 3700 | 0.0325 | - |
| 4.9342 | 3750 | 0.019 | - |
| 5.0 | 3800 | 0.0572 | - |
| 0.0013 | 1 | 0.1435 | - |
| 0.0658 | 50 | 0.0969 | - |
| 0.1316 | 100 | 0.1085 | - |
| 0.1974 | 150 | 0.0271 | - |
| 0.2632 | 200 | 0.0138 | - |
| 0.3289 | 250 | 0.058 | - |
| 0.3947 | 300 | 0.1205 | - |
| 0.4605 | 350 | 0.0788 | - |
| 0.5263 | 400 | 0.1449 | - |
| 0.5921 | 450 | 0.0383 | - |
| 0.6579 | 500 | 0.0338 | - |
| 0.7237 | 550 | 0.1253 | - |
| 0.7895 | 600 | 0.069 | - |
| 0.8553 | 650 | 0.104 | - |
| 0.9211 | 700 | 0.0462 | - |
| 0.9868 | 750 | 0.1975 | - |
| 1.0526 | 800 | 0.0241 | - |
| 1.1184 | 850 | 0.0426 | - |
| 1.1842 | 900 | 0.0519 | - |
| 1.25 | 950 | 0.0815 | - |
| 1.3158 | 1000 | 0.1839 | - |
| 1.3816 | 1050 | 0.0198 | - |
| 1.4474 | 1100 | 0.0128 | - |
| 1.5132 | 1150 | 0.1645 | - |
| 1.5789 | 1200 | 0.0019 | - |
| 1.6447 | 1250 | 0.0557 | - |
| 1.7105 | 1300 | 0.0098 | - |
| 1.7763 | 1350 | 0.001 | - |
| 1.8421 | 1400 | 0.1557 | - |
| 1.9079 | 1450 | 0.1286 | - |
| 1.9737 | 1500 | 0.094 | - |
| 2.0395 | 1550 | 0.0059 | - |
| 2.1053 | 1600 | 0.0227 | - |
| 2.1711 | 1650 | 0.0899 | - |
| 2.2368 | 1700 | 0.0053 | - |
| 2.3026 | 1750 | 0.0021 | - |
| 2.3684 | 1800 | 0.0114 | - |
| 2.4342 | 1850 | 0.1163 | - |
| 2.5 | 1900 | 0.0959 | - |
| 2.5658 | 1950 | 0.0252 | - |
| 2.6316 | 2000 | 0.0921 | - |
| 2.6974 | 2050 | 0.1159 | - |
| 2.7632 | 2100 | 0.0026 | - |
| 2.8289 | 2150 | 0.1211 | - |
| 2.8947 | 2200 | 0.1843 | - |
| 2.9605 | 2250 | 0.0014 | - |
| 3.0263 | 2300 | 0.0085 | - |
| 3.0921 | 2350 | 0.0839 | - |
| 3.1579 | 2400 | 0.2372 | - |
| 3.2237 | 2450 | 0.0213 | - |
| 3.2895 | 2500 | 0.0155 | - |
| 3.3553 | 2550 | 0.1128 | - |
| 3.4211 | 2600 | 0.0945 | - |
| 3.4868 | 2650 | 0.0917 | - |
| 3.5526 | 2700 | 0.0011 | - |
| 3.6184 | 2750 | 0.0024 | - |
| 3.6842 | 2800 | 0.0044 | - |
| 3.75 | 2850 | 0.121 | - |
| 3.8158 | 2900 | 0.0056 | - |
| 3.8816 | 2950 | 0.003 | - |
| 3.9474 | 3000 | 0.0899 | - |
| 4.0132 | 3050 | 0.0157 | - |
| 4.0789 | 3100 | 0.1188 | - |
| 4.1447 | 3150 | 0.001 | - |
| 4.2105 | 3200 | 0.0222 | - |
| 4.2763 | 3250 | 0.1209 | - |
| 4.3421 | 3300 | 0.1085 | - |
| 4.4079 | 3350 | 0.0054 | - |
| 4.4737 | 3400 | 0.0009 | - |
| 4.5395 | 3450 | 0.0015 | - |
| 4.6053 | 3500 | 0.003 | - |
| 4.6711 | 3550 | 0.0009 | - |
| 4.7368 | 3600 | 0.0003 | - |
| 4.8026 | 3650 | 0.0009 | - |
| 4.8684 | 3700 | 0.03 | - |
| 4.9342 | 3750 | 0.1206 | - |
| 5.0 | 3800 | 0.0003 | - |
| 0.0013 | 1 | 0.2045 | - |
| 0.0658 | 50 | 0.0078 | - |
| 0.1316 | 100 | 0.0087 | - |
| 0.1974 | 150 | 0.0386 | - |
| 0.2632 | 200 | 0.1015 | - |
| 0.3289 | 250 | 0.0022 | - |
| 0.3947 | 300 | 0.0291 | - |
| 0.4605 | 350 | 0.0013 | - |
| 0.5263 | 400 | 0.0022 | - |
| 0.5921 | 450 | 0.1324 | - |
| 0.6579 | 500 | 0.113 | - |
| 0.7237 | 550 | 0.0011 | - |
| 0.7895 | 600 | 0.1723 | - |
| 0.8553 | 650 | 0.0049 | - |
| 0.9211 | 700 | 0.206 | - |
| 0.9868 | 750 | 0.1683 | - |
| 1.0526 | 800 | 0.0954 | - |
| 1.1184 | 850 | 0.018 | - |
| 1.1842 | 900 | 0.1854 | - |
| 1.25 | 950 | 0.0342 | - |
| 1.3158 | 1000 | 0.0015 | - |
| 1.3816 | 1050 | 0.0062 | - |
| 1.4474 | 1100 | 0.1187 | - |
| 1.5132 | 1150 | 0.0048 | - |
| 1.5789 | 1200 | 0.0011 | - |
| 1.6447 | 1250 | 0.002 | - |
| 1.7105 | 1300 | 0.092 | - |
| 1.7763 | 1350 | 0.1245 | - |
| 1.8421 | 1400 | 0.0009 | - |
| 1.9079 | 1450 | 0.1185 | - |
| 1.9737 | 1500 | 0.0017 | - |
| 2.0395 | 1550 | 0.008 | - |
| 2.1053 | 1600 | 0.0049 | - |
| 2.1711 | 1650 | 0.0083 | - |
| 2.2368 | 1700 | 0.0026 | - |
| 2.3026 | 1750 | 0.0081 | - |
| 2.3684 | 1800 | 0.0036 | - |
| 2.4342 | 1850 | 0.0016 | - |
| 2.5 | 1900 | 0.0017 | - |
| 2.5658 | 1950 | 0.0014 | - |
| 2.6316 | 2000 | 0.0017 | - |
| 2.6974 | 2050 | 0.002 | - |
| 2.7632 | 2100 | 0.1022 | - |
| 2.8289 | 2150 | 0.0004 | - |
| 2.8947 | 2200 | 0.0007 | - |
| 2.9605 | 2250 | 0.0794 | - |
| 3.0263 | 2300 | 0.0183 | - |
| 3.0921 | 2350 | 0.0377 | - |
| 3.1579 | 2400 | 0.029 | - |
| 3.2237 | 2450 | 0.0003 | - |
| 3.2895 | 2500 | 0.0961 | - |
| 3.3553 | 2550 | 0.0008 | - |
| 3.4211 | 2600 | 0.0873 | - |
| 3.4868 | 2650 | 0.0501 | - |
| 3.5526 | 2700 | 0.0029 | - |
| 3.6184 | 2750 | 0.0008 | - |
| 3.6842 | 2800 | 0.0004 | - |
| 3.75 | 2850 | 0.0011 | - |
| 3.8158 | 2900 | 0.0518 | - |
| 3.8816 | 2950 | 0.0002 | - |
| 3.9474 | 3000 | 0.1115 | - |
| 4.0132 | 3050 | 0.0129 | - |
| 4.0789 | 3100 | 0.0005 | - |
| 4.1447 | 3150 | 0.0012 | - |
| 4.2105 | 3200 | 0.1086 | - |
| 4.2763 | 3250 | 0.0199 | - |
| 4.3421 | 3300 | 0.0004 | - |
| 4.4079 | 3350 | 0.0001 | - |
| 4.4737 | 3400 | 0.0832 | - |
| 4.5395 | 3450 | 0.0003 | - |
| 4.6053 | 3500 | 0.0041 | - |
| 4.6711 | 3550 | 0.1146 | - |
| 4.7368 | 3600 | 0.0027 | - |
| 4.8026 | 3650 | 0.0002 | - |
| 4.8684 | 3700 | 0.0544 | - |
| 4.9342 | 3750 | 0.0002 | - |
| 5.0 | 3800 | 0.0046 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CAS"
] |
mogaio/pr_ebsa_fr_tran_merged25_e1_middle_offsets | mogaio | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-15T18:59:40 | 2023-12-15T19:01:07 | 49 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: setfit
metrics:
- accuracy_score
- classification_report
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Adil Hussain
Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin
Shah à l''époque où il fréquentait l''École nationale d''art dramatique'
- text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un
avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs
de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré
que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en
accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates
aspirent à renverser six circonscriptions détenues par les républicains que M.
Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés
de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine
Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la
conférence du parti démocrate à la Chambre des représentants,
Suite à la page suivante
a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions
de dollars aux campagnes dans les circonscriptions de New York Des problèmes à
venir pour les démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les
démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge.
Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune
issue ne soit en vue : les retombées potentielles pour leur parti lors des élections
de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils
parlent d''immigration - comme les démocrates le font pour l''avortement - et
sont clairement à l''attaque sur la question des migrants à New York, tandis que
les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication
pour le Centre de politique de l''Université de Virginie, au réseau USA Today
Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud
depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville,
et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au
nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux
frais de la ville Les démocrates doivent y remporter des victoires pour gagner
cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain
président de la Chambre des représentants Les publicités d''attaque des républicains
s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images
télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams
et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute
et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac
Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales
à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique
de la crise des migrants, soulignant que les élections de 2024 n''auront lieu
que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient
se poser'
- text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA
H-1B AUX ETATS-UNIS
Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy,
candidat républicain indien-américain à l''élection présidentielle, a promis de
"vider" le système basé sur la loterie et de le remplacer par un système d''admission
méritocratique s''il remporte les élections présidentielles de 2024'
- text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover
("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris
Owen Donald Glover
("Queer as Folk") a 54 ans. a 54 ans. Acteur
("Je sais ce que vous avez fait l''été dernier") a 50 ans'
- text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche.
"Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi
en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de
ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient
même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne
peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens
les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi
réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart,
ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago,
voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations
de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection
américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé
Howard, qui était le roi de tous les médias, en prince Harry de tous les médias.
Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission
de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire
type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous
avec lui ?"
M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre
de ses sketches à l''antenne, a été un critique virulent de Trump tout au long
de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à
nouveau en 2024.
En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu
l''élection ?"
Il a poursuivi en qualifiant les partisans de Trump de "nigauds".
"Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il
y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré
Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface
de la terre, pourquoi traîniez-vous avec lui ?"
M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas
soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes
qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke"
comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué
ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus
récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans
un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump
a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon
OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern
chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego".
Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela
signifie, ou que je soutiens les personnes qui veulent être transgenres ou que
je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine.
"Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs.
L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course
à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé
sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy
Failla critique Stern.
"L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire
la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne",
a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre
Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral'
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy_score
value: 0.9434954007884363
name: Accuracy_Score
- type: classification_report
value:
'0':
precision: 0.9361702127659575
recall: 0.9322033898305084
f1-score: 0.9341825902335456
support: 236
'1':
precision: 0.9333333333333333
recall: 0.9302325581395349
f1-score: 0.9317803660565723
support: 301
'2':
precision: 0.9646017699115044
recall: 0.9732142857142857
f1-score: 0.9688888888888889
support: 224
accuracy: 0.9434954007884363
macro avg:
precision: 0.9447017720035985
recall: 0.945216744561443
f1-score: 0.9449506150596689
support: 761
weighted avg:
precision: 0.9434169513880108
recall: 0.9434954007884363
f1-score: 0.9434482162802315
support: 761
name: Classification_Report
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> |
| neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> |
| obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy_Score |
|:--------|:---------------|
| **all** | 0.9435 |

Per-class breakdown of the classification report (precision / recall / F1 / support):

| Class | Precision | Recall | F1-score | Support |
|:-------------|----------:|-------:|---------:|--------:|
| 0 | 0.9362 | 0.9322 | 0.9342 | 236 |
| 1 | 0.9333 | 0.9302 | 0.9318 | 301 |
| 2 | 0.9646 | 0.9732 | 0.9689 | 224 |
| macro avg | 0.9447 | 0.9452 | 0.9450 | 761 |
| weighted avg | 0.9434 | 0.9435 | 0.9434 | 761 |
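These figures correspond to scikit-learn's `accuracy_score` and `classification_report`. As a minimal sketch (not the original evaluation script), they could be recomputed on a held-out split like this, where `test_texts` and `y_true` are hypothetical placeholders:

```python
from setfit import SetFitModel
from sklearn.metrics import accuracy_score, classification_report

model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e1_middle_offsets")

# test_texts: list of raw strings; y_true: their gold labels (placeholders)
y_pred = model.predict(test_texts)
print(accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```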
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e1_middle_offsets")
# Run inference
preds = model("Adil Hussain
Adil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin Shah à l'époque où il fréquentait l'École nationale d'art dramatique")
```
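Each call returns one predicted label per input; for this checkpoint the label set is `neg`, `obj`, and `pos` (see the label counts under Training Set Metrics below).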
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 9 | 247.2638 | 2089 |
| Label | Training Sample Count |
|:------|:----------------------|
| neg | 913 |
| obj | 1216 |
| pos | 911 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 1
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
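As a rough sketch, these values map directly onto SetFit 1.0's `TrainingArguments`. The base checkpoint and `train_dataset` below are hypothetical placeholders, not details stated on this card:

```python
from setfit import SetFitModel, Trainer, TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

# Placeholder base model; the actual body used for this checkpoint is not restated here
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(8, 8),                # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    num_iterations=1,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    warmup_proportion=0.1,
    seed=42,
)

# train_dataset: a datasets.Dataset with "text" and "label" columns (placeholder)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```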
### Training Results
Note that the epoch and step counters in the log below reset to zero several times; the table appears to concatenate the logs of several successive training runs rather than a single run.
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.3703 | - |
| 0.0658 | 50 | 0.3145 | - |
| 0.1316 | 100 | 0.1839 | - |
| 0.1974 | 150 | 0.2558 | - |
| 0.2632 | 200 | 0.2683 | - |
| 0.3289 | 250 | 0.1572 | - |
| 0.3947 | 300 | 0.1953 | - |
| 0.4605 | 350 | 0.171 | - |
| 0.5263 | 400 | 0.2326 | - |
| 0.5921 | 450 | 0.1762 | - |
| 0.6579 | 500 | 0.2818 | - |
| 0.7237 | 550 | 0.2733 | - |
| 0.7895 | 600 | 0.195 | - |
| 0.8553 | 650 | 0.2104 | - |
| 0.9211 | 700 | 0.2124 | - |
| 0.9868 | 750 | 0.0818 | - |
| 1.0526 | 800 | 0.1046 | - |
| 1.1184 | 850 | 0.1633 | - |
| 1.1842 | 900 | 0.3207 | - |
| 1.25 | 950 | 0.2703 | - |
| 1.3158 | 1000 | 0.1934 | - |
| 1.3816 | 1050 | 0.2547 | - |
| 1.4474 | 1100 | 0.0933 | - |
| 1.5132 | 1150 | 0.2102 | - |
| 1.5789 | 1200 | 0.0699 | - |
| 1.6447 | 1250 | 0.1778 | - |
| 1.7105 | 1300 | 0.1796 | - |
| 1.7763 | 1350 | 0.0221 | - |
| 1.8421 | 1400 | 0.2154 | - |
| 1.9079 | 1450 | 0.1683 | - |
| 1.9737 | 1500 | 0.3096 | - |
| 2.0395 | 1550 | 0.201 | - |
| 2.1053 | 1600 | 0.1954 | - |
| 2.1711 | 1650 | 0.2301 | - |
| 2.2368 | 1700 | 0.1141 | - |
| 2.3026 | 1750 | 0.1949 | - |
| 2.3684 | 1800 | 0.164 | - |
| 2.4342 | 1850 | 0.2307 | - |
| 2.5 | 1900 | 0.1912 | - |
| 2.5658 | 1950 | 0.2349 | - |
| 2.6316 | 2000 | 0.0922 | - |
| 2.6974 | 2050 | 0.0702 | - |
| 2.7632 | 2100 | 0.1089 | - |
| 2.8289 | 2150 | 0.1711 | - |
| 2.8947 | 2200 | 0.1432 | - |
| 2.9605 | 2250 | 0.2739 | - |
| 3.0263 | 2300 | 0.1889 | - |
| 3.0921 | 2350 | 0.1036 | - |
| 3.1579 | 2400 | 0.1372 | - |
| 3.2237 | 2450 | 0.028 | - |
| 3.2895 | 2500 | 0.1739 | - |
| 3.3553 | 2550 | 0.142 | - |
| 3.4211 | 2600 | 0.0838 | - |
| 3.4868 | 2650 | 0.0657 | - |
| 3.5526 | 2700 | 0.0054 | - |
| 3.6184 | 2750 | 0.0426 | - |
| 3.6842 | 2800 | 0.1974 | - |
| 3.75 | 2850 | 0.0279 | - |
| 3.8158 | 2900 | 0.1326 | - |
| 3.8816 | 2950 | 0.1614 | - |
| 3.9474 | 3000 | 0.1251 | - |
| 4.0132 | 3050 | 0.1174 | - |
| 4.0789 | 3100 | 0.1948 | - |
| 4.1447 | 3150 | 0.0555 | - |
| 4.2105 | 3200 | 0.0064 | - |
| 4.2763 | 3250 | 0.064 | - |
| 4.3421 | 3300 | 0.0013 | - |
| 4.4079 | 3350 | 0.135 | - |
| 4.4737 | 3400 | 0.0574 | - |
| 4.5395 | 3450 | 0.174 | - |
| 4.6053 | 3500 | 0.2199 | - |
| 4.6711 | 3550 | 0.387 | - |
| 4.7368 | 3600 | 0.114 | - |
| 4.8026 | 3650 | 0.0853 | - |
| 4.8684 | 3700 | 0.0325 | - |
| 4.9342 | 3750 | 0.019 | - |
| 5.0 | 3800 | 0.0572 | - |
| 0.0013 | 1 | 0.1435 | - |
| 0.0658 | 50 | 0.0969 | - |
| 0.1316 | 100 | 0.1085 | - |
| 0.1974 | 150 | 0.0271 | - |
| 0.2632 | 200 | 0.0138 | - |
| 0.3289 | 250 | 0.058 | - |
| 0.3947 | 300 | 0.1205 | - |
| 0.4605 | 350 | 0.0788 | - |
| 0.5263 | 400 | 0.1449 | - |
| 0.5921 | 450 | 0.0383 | - |
| 0.6579 | 500 | 0.0338 | - |
| 0.7237 | 550 | 0.1253 | - |
| 0.7895 | 600 | 0.069 | - |
| 0.8553 | 650 | 0.104 | - |
| 0.9211 | 700 | 0.0462 | - |
| 0.9868 | 750 | 0.1975 | - |
| 1.0526 | 800 | 0.0241 | - |
| 1.1184 | 850 | 0.0426 | - |
| 1.1842 | 900 | 0.0519 | - |
| 1.25 | 950 | 0.0815 | - |
| 1.3158 | 1000 | 0.1839 | - |
| 1.3816 | 1050 | 0.0198 | - |
| 1.4474 | 1100 | 0.0128 | - |
| 1.5132 | 1150 | 0.1645 | - |
| 1.5789 | 1200 | 0.0019 | - |
| 1.6447 | 1250 | 0.0557 | - |
| 1.7105 | 1300 | 0.0098 | - |
| 1.7763 | 1350 | 0.001 | - |
| 1.8421 | 1400 | 0.1557 | - |
| 1.9079 | 1450 | 0.1286 | - |
| 1.9737 | 1500 | 0.094 | - |
| 2.0395 | 1550 | 0.0059 | - |
| 2.1053 | 1600 | 0.0227 | - |
| 2.1711 | 1650 | 0.0899 | - |
| 2.2368 | 1700 | 0.0053 | - |
| 2.3026 | 1750 | 0.0021 | - |
| 2.3684 | 1800 | 0.0114 | - |
| 2.4342 | 1850 | 0.1163 | - |
| 2.5 | 1900 | 0.0959 | - |
| 2.5658 | 1950 | 0.0252 | - |
| 2.6316 | 2000 | 0.0921 | - |
| 2.6974 | 2050 | 0.1159 | - |
| 2.7632 | 2100 | 0.0026 | - |
| 2.8289 | 2150 | 0.1211 | - |
| 2.8947 | 2200 | 0.1843 | - |
| 2.9605 | 2250 | 0.0014 | - |
| 3.0263 | 2300 | 0.0085 | - |
| 3.0921 | 2350 | 0.0839 | - |
| 3.1579 | 2400 | 0.2372 | - |
| 3.2237 | 2450 | 0.0213 | - |
| 3.2895 | 2500 | 0.0155 | - |
| 3.3553 | 2550 | 0.1128 | - |
| 3.4211 | 2600 | 0.0945 | - |
| 3.4868 | 2650 | 0.0917 | - |
| 3.5526 | 2700 | 0.0011 | - |
| 3.6184 | 2750 | 0.0024 | - |
| 3.6842 | 2800 | 0.0044 | - |
| 3.75 | 2850 | 0.121 | - |
| 3.8158 | 2900 | 0.0056 | - |
| 3.8816 | 2950 | 0.003 | - |
| 3.9474 | 3000 | 0.0899 | - |
| 4.0132 | 3050 | 0.0157 | - |
| 4.0789 | 3100 | 0.1188 | - |
| 4.1447 | 3150 | 0.001 | - |
| 4.2105 | 3200 | 0.0222 | - |
| 4.2763 | 3250 | 0.1209 | - |
| 4.3421 | 3300 | 0.1085 | - |
| 4.4079 | 3350 | 0.0054 | - |
| 4.4737 | 3400 | 0.0009 | - |
| 4.5395 | 3450 | 0.0015 | - |
| 4.6053 | 3500 | 0.003 | - |
| 4.6711 | 3550 | 0.0009 | - |
| 4.7368 | 3600 | 0.0003 | - |
| 4.8026 | 3650 | 0.0009 | - |
| 4.8684 | 3700 | 0.03 | - |
| 4.9342 | 3750 | 0.1206 | - |
| 5.0 | 3800 | 0.0003 | - |
| 0.0013 | 1 | 0.2045 | - |
| 0.0658 | 50 | 0.0078 | - |
| 0.1316 | 100 | 0.0087 | - |
| 0.1974 | 150 | 0.0386 | - |
| 0.2632 | 200 | 0.1015 | - |
| 0.3289 | 250 | 0.0022 | - |
| 0.3947 | 300 | 0.0291 | - |
| 0.4605 | 350 | 0.0013 | - |
| 0.5263 | 400 | 0.0022 | - |
| 0.5921 | 450 | 0.1324 | - |
| 0.6579 | 500 | 0.113 | - |
| 0.7237 | 550 | 0.0011 | - |
| 0.7895 | 600 | 0.1723 | - |
| 0.8553 | 650 | 0.0049 | - |
| 0.9211 | 700 | 0.206 | - |
| 0.9868 | 750 | 0.1683 | - |
| 1.0526 | 800 | 0.0954 | - |
| 1.1184 | 850 | 0.018 | - |
| 1.1842 | 900 | 0.1854 | - |
| 1.25 | 950 | 0.0342 | - |
| 1.3158 | 1000 | 0.0015 | - |
| 1.3816 | 1050 | 0.0062 | - |
| 1.4474 | 1100 | 0.1187 | - |
| 1.5132 | 1150 | 0.0048 | - |
| 1.5789 | 1200 | 0.0011 | - |
| 1.6447 | 1250 | 0.002 | - |
| 1.7105 | 1300 | 0.092 | - |
| 1.7763 | 1350 | 0.1245 | - |
| 1.8421 | 1400 | 0.0009 | - |
| 1.9079 | 1450 | 0.1185 | - |
| 1.9737 | 1500 | 0.0017 | - |
| 2.0395 | 1550 | 0.008 | - |
| 2.1053 | 1600 | 0.0049 | - |
| 2.1711 | 1650 | 0.0083 | - |
| 2.2368 | 1700 | 0.0026 | - |
| 2.3026 | 1750 | 0.0081 | - |
| 2.3684 | 1800 | 0.0036 | - |
| 2.4342 | 1850 | 0.0016 | - |
| 2.5 | 1900 | 0.0017 | - |
| 2.5658 | 1950 | 0.0014 | - |
| 2.6316 | 2000 | 0.0017 | - |
| 2.6974 | 2050 | 0.002 | - |
| 2.7632 | 2100 | 0.1022 | - |
| 2.8289 | 2150 | 0.0004 | - |
| 2.8947 | 2200 | 0.0007 | - |
| 2.9605 | 2250 | 0.0794 | - |
| 3.0263 | 2300 | 0.0183 | - |
| 3.0921 | 2350 | 0.0377 | - |
| 3.1579 | 2400 | 0.029 | - |
| 3.2237 | 2450 | 0.0003 | - |
| 3.2895 | 2500 | 0.0961 | - |
| 3.3553 | 2550 | 0.0008 | - |
| 3.4211 | 2600 | 0.0873 | - |
| 3.4868 | 2650 | 0.0501 | - |
| 3.5526 | 2700 | 0.0029 | - |
| 3.6184 | 2750 | 0.0008 | - |
| 3.6842 | 2800 | 0.0004 | - |
| 3.75 | 2850 | 0.0011 | - |
| 3.8158 | 2900 | 0.0518 | - |
| 3.8816 | 2950 | 0.0002 | - |
| 3.9474 | 3000 | 0.1115 | - |
| 4.0132 | 3050 | 0.0129 | - |
| 4.0789 | 3100 | 0.0005 | - |
| 4.1447 | 3150 | 0.0012 | - |
| 4.2105 | 3200 | 0.1086 | - |
| 4.2763 | 3250 | 0.0199 | - |
| 4.3421 | 3300 | 0.0004 | - |
| 4.4079 | 3350 | 0.0001 | - |
| 4.4737 | 3400 | 0.0832 | - |
| 4.5395 | 3450 | 0.0003 | - |
| 4.6053 | 3500 | 0.0041 | - |
| 4.6711 | 3550 | 0.1146 | - |
| 4.7368 | 3600 | 0.0027 | - |
| 4.8026 | 3650 | 0.0002 | - |
| 4.8684 | 3700 | 0.0544 | - |
| 4.9342 | 3750 | 0.0002 | - |
| 5.0 | 3800 | 0.0046 | - |
| 0.0013 | 1 | 0.0015 | - |
| 0.0658 | 50 | 0.1973 | - |
| 0.1316 | 100 | 0.0106 | - |
| 0.1974 | 150 | 0.0744 | - |
| 0.2632 | 200 | 0.1033 | - |
| 0.3289 | 250 | 0.0425 | - |
| 0.3947 | 300 | 0.1125 | - |
| 0.4605 | 350 | 0.0018 | - |
| 0.5263 | 400 | 0.0019 | - |
| 0.5921 | 450 | 0.0002 | - |
| 0.6579 | 500 | 0.0007 | - |
| 0.7237 | 550 | 0.1393 | - |
| 0.7895 | 600 | 0.0002 | - |
| 0.8553 | 650 | 0.0043 | - |
| 0.9211 | 700 | 0.0339 | - |
| 0.9868 | 750 | 0.0002 | - |
| 0.0013 | 1 | 0.0007 | - |
| 0.0658 | 50 | 0.0419 | - |
| 0.1316 | 100 | 0.0068 | - |
| 0.1974 | 150 | 0.1401 | - |
| 0.2632 | 200 | 0.0423 | - |
| 0.3289 | 250 | 0.1122 | - |
| 0.3947 | 300 | 0.0037 | - |
| 0.4605 | 350 | 0.005 | - |
| 0.5263 | 400 | 0.0006 | - |
| 0.5921 | 450 | 0.0006 | - |
| 0.6579 | 500 | 0.0016 | - |
| 0.7237 | 550 | 0.1244 | - |
| 0.7895 | 600 | 0.0016 | - |
| 0.8553 | 650 | 0.0028 | - |
| 0.9211 | 700 | 0.002 | - |
| 0.9868 | 750 | 0.057 | - |
| 0.0013 | 1 | 0.1396 | - |
| 0.0658 | 50 | 0.0366 | - |
| 0.1316 | 100 | 0.0021 | - |
| 0.1974 | 150 | 0.1088 | - |
| 0.2632 | 200 | 0.0449 | - |
| 0.3289 | 250 | 0.0187 | - |
| 0.3947 | 300 | 0.0017 | - |
| 0.4605 | 350 | 0.1262 | - |
| 0.5263 | 400 | 0.0052 | - |
| 0.5921 | 450 | 0.1188 | - |
| 0.6579 | 500 | 0.0002 | - |
| 0.7237 | 550 | 0.0006 | - |
| 0.7895 | 600 | 0.0758 | - |
| 0.8553 | 650 | 0.025 | - |
| 0.9211 | 700 | 0.0052 | - |
| 0.9868 | 750 | 0.1985 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CAS"
] |
mrzaizai2k/distilroberta-base-sentence-transformer-triplets | mrzaizai2k | sentence-similarity | [
"sentence-transformers",
"safetensors",
"distilbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:101762",
"loss:TripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:sentence-transformers/distiluse-base-multilingual-cased-v2",
"base_model:finetune:sentence-transformers/distiluse-base-multilingual-cased-v2",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-12T07:29:16 | 2024-07-12T07:29:59 | 49 | 0 | ---
base_model: sentence-transformers/distiluse-base-multilingual-cased-v2
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:101762
- loss:TripletLoss
widget:
- source_sentence: How do I clean the screen of my Toshiba TV?
sentences:
- How can I clear screen overlay from my Samsung Galaxy 6?
- Why do police forces exist?
- What is the best way to clean a flat screen monitor?
- source_sentence: What was the first video you watched on YouTube?
sentences:
- What was the first Youtube video you ever watched?
- What was the first music video ever produced?
- What was the long term effect of Hitler's desire to exterminate the Jewish people?
- source_sentence: What should I do to recover my data from a hard disk?
sentences:
- How do I recover my deleted data files from a hard disk?
- What's the best Linux operating System distro for beginners?
- Formated Data Recovery – Recover Data from Memory Card, Disk Drive, USB, External
Drive?
- source_sentence: What are your personal top ten music albums of all time?
sentences:
- What are your top 10 favourite songs of all time?
- What are the Top 10 music albums of all time on your list?
- What stream should I take in 11th if I have to become an automobile engineer?
- source_sentence: What is the best website to learn coding independently?
sentences:
- What are some of the best website to learn programming from being a total beginner?
- What books do I need to read to learn more about Sufism?
- What is the best (and fastest) way to learn how to code (web development)?
---
# SentenceTransformer based on sentence-transformers/distiluse-base-multilingual-cased-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) <!-- at revision 03a0532331151aeb3e1d2e602ffad62bb212a38d -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("chibao24/distilroberta-base-sentence-transformer-triplets")
# Run inference
sentences = [
'What is the best website to learn coding independently?',
'What are some of the best website to learn programming from being a total beginner?',
'What is the best (and fastest) way to learn how to code (web development)?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 101,762 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | sentence_2 |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.7 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.66 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.22 tokens</li><li>max: 84 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 | sentence_2 |
|:-------------------------------------------------------------------------------|:----------------------------------------------------------------------|:------------------------------------------------------------|
| <code>What are the differences between "be made of" and "be made from"?</code> | <code>What's the difference between "made of" and "made from"?</code> | <code>What is the difference between make and craft?</code> |
| <code>How can we use the word "inertia" in a sentence?</code> | <code>How can the word "inertia" be used in a sentence?</code> | <code>What is inertia actually?</code> |
| <code>Who are the new (i.e. first-time) Top Question Writers for 2017?</code> | <code>Who are the top question writers for 2017?</code> | <code>Who are the 2016 Top Writers?</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
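A minimal sketch of how this loss configuration translates into Sentence Transformers 3.0 code, together with the non-default hyperparameters listed below; `train_dataset` is a placeholder `datasets.Dataset` with the three sentence columns described above:

```python
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric

model = SentenceTransformer("sentence-transformers/distiluse-base-multilingual-cased-v2")

# Euclidean triplet loss with margin 5, matching the parameters above
loss = TripletLoss(model, distance_metric=TripletDistanceMetric.EUCLIDEAN, triplet_margin=5)

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    per_device_train_batch_size=128,
    num_train_epochs=4,
)

trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```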
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `num_train_epochs`: 4
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.6281 | 500 | 4.2255 |
| 1.2563 | 1000 | 3.484 |
| 1.8844 | 1500 | 2.8611 |
| 2.5126 | 2000 | 2.4607 |
| 3.1407 | 2500 | 2.148 |
| 3.7688 | 3000 | 1.8583 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CRAFT"
] |
maastrichtlawtech/distilcamembert-lleqa | maastrichtlawtech | sentence-similarity | [
"sentence-transformers",
"pytorch",
"safetensors",
"camembert",
"feature-extraction",
"sentence-similarity",
"fr",
"dataset:maastrichtlawtech/lleqa",
"arxiv:2309.17050",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-28T13:03:19 | 2024-10-31T12:58:12 | 48 | 3 | ---
datasets:
- maastrichtlawtech/lleqa
language: fr
library_name: sentence-transformers
license: apache-2.0
metrics:
- recall
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
inference: true
widget:
- source_sentence: Je reçois des confidences liées à mon emploi. Qu'est-ce que je
risque si je viole le secret professionnel ?
sentences:
- 'Art. 1 : Les médecins, chirurgiens, officiers de santé, pharmaciens, sages-femmes
et toutes autres personnes dépositaires, par état ou par profession, des secrets
qu''on leur confie, qui, hors le cas où ils sont appelés à rendre témoignage en
justice ou devant une commission d''enquête parlementaire et celui où la loi,
le décret ou l''ordonnance les oblige ou les autoriseà faire connaître ces secrets,
les auront révélés, seront punis d''un emprisonnement d''un an à trois ans et
d''une amende de cent euros à mille euros ou d''une de ces peines seulement.'
- 'Art. 2 : L''allocataire peut demander l''allocation de naissance à partir du
sixième mois de la grossesse et en obtenir le paiement deux mois avant la date
probable de la naissance mentionnée sur le certificat médical à joindre à la demande.L''allocation
de naissance demandée conformément à l''alinéa 1er est due par la caisse d''allocations
familiales, par l''autorité ou par l''établissement public qui serait compétent,
selon le cas, pour payer les allocations familiales à la date à laquelle la demande
de paiement anticipé est introduite.'
- 'Art. 3 : La periode de maternité constitue une période de repos de douze semaines,
ou de treize semainesen cas de naissance multiple, au cours de laquelle la titulaire
ne peut exercer son activité professionnelle habituelle ni aucune autre activité
professionnelle.'
example_title: Secret professionnel
---
# distilcamembert-lleqa
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the [LLeQA](https://huggingface.co/datasets/maastrichtlawtech/lleqa) dataset for legal information retrieval in **French**.
## Usage
***
#### Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('maastrichtlawtech/distilcamembert-lleqa')
embeddings = model.encode(sentences)
print(embeddings)
```
#### 🤗 Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('maastrichtlawtech/distilcamembert-lleqa')
model = AutoModel.from_pretrained('maastrichtlawtech/distilcamembert-lleqa')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings)
```
## Evaluation
***
We evaluate the model on the test set of LLeQA, which consists of 195 legal questions posed against a knowledge corpus of 27.9K candidate articles. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k).
| MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100 | R@500 |
|---------:|----------:|---------:|-------:|--------:|--------:|
| 36.67 | 37.24 | 29.26 | 52.95 | 78.07 | 90.17 |
## Training
***
#### Background
We utilized the [distilcamembert-base](https://huggingface.co/cmarkea/distilcamembert-base) model and fine-tuned it on 9.3K question-article pairs in French, using a contrastive learning objective: given a short legal question, the model should predict which one of a set of sampled legal articles was actually paired with it in the dataset. Formally, we compute the cosine similarity between every possible pair in the batch, then apply the cross-entropy loss with a temperature of 0.05, comparing against the true pairs.
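Concretely, for a batch of $B$ question-article pairs $(q_i, a_i)$, this objective can be written as the in-batch cross-entropy (InfoNCE) loss with temperature $\tau = 0.05$:

$$\mathcal{L} = -\frac{1}{B}\sum_{i=1}^{B}\log\frac{\exp\left(\cos(q_i, a_i)/\tau\right)}{\sum_{j=1}^{B}\exp\left(\cos(q_i, a_j)/\tau\right)}$$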
#### Hyperparameters
We trained the model on a single Tesla V100 GPU with 32GB of memory for 20 epochs (i.e., 5.4k steps) using a batch size of 32. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning-rate warmup over the first 50 steps, and linear decay of the learning rate. The sequence length was limited to 384 tokens.
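For instance, the 384-token limit corresponds to capping the model's maximum sequence length in sentence-transformers; a sketch, not the original training script:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("maastrichtlawtech/distilcamembert-lleqa")
model.max_seq_length = 384  # inputs longer than 384 tokens are truncated
```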
#### Data
We use the [Long-form Legal Question Answering (LLeQA)](https://huggingface.co/datasets/maastrichtlawtech/lleqa) dataset to fine-tune the model. LLeQA is a native French dataset for studying legal information retrieval and question answering. It consists of a knowledge corpus of 27,941 statutory articles collected from Belgian legislation, and 1,868 legal questions posed by Belgian citizens and labeled by experienced jurists with a comprehensive answer rooted in relevant articles from the corpus.
## Citation
```bibtex
@article{louis2023interpretable,
author = {Louis, Antoine and van Dijck, Gijs and Spanakis, Gerasimos},
title = {Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models},
journal = {CoRR},
volume = {abs/2309.17050},
year = {2023},
url = {https://arxiv.org/abs/2309.17050},
eprinttype = {arXiv},
eprint = {2309.17050},
}
```
| [
"QUESTION_ANSWERING"
] | [
"CAS"
] |
mogaio/pr_ebsa_fr_tran_merged25_e5_beginning_offsets | mogaio | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-multilingual-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-15T18:27:03 | 2023-12-15T18:28:05 | 48 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
library_name: setfit
metrics:
- accuracy_score
- classification_report
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'Adil Hussain
Adil Hussain est reconnaissant d''avoir reçu l''enseignement de l''acteur Naseeruddin
Shah à l''époque où il fréquentait l''École nationale d''art dramatique'
- text: 'Bien que leurs opinions sur la question de savoir si les migrants sont un
avantage ou un fardeau soient plus mitigées, de nettes majorités d''électeurs
de toute la ville de New York, de la banlieue et du nord de l''État ont déclaré
que l''État devrait essayer de ralentir l''afflux de migrants, plutôt que d''en
accepter davantage et de s''efforcer d''assimiler les nouveaux arrivants Les démocrates
aspirent à renverser six circonscriptions détenues par les républicains que M.
Biden a remportées en 2020, notamment celle de M Les républicains se sont emparés
de la crise des migrants, donnant un avant-goût des campagnes de l''année prochaine
Les républicains ont surenchéri : Elise Stefanik, la New-Yorkaise qui dirige la
conférence du parti démocrate à la Chambre des représentants,
Suite à la page suivante
a déclaré à Politico la semaine dernière que le parti allait consacrer 100 millions
de dollars aux campagnes dans les circonscriptions de New York Des problèmes à
venir pour les démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge Des problèmes à venir pour les
démocrates de New York en 2024 ?
Les dirigeants démocrates de New York se débattent depuis des mois avec le problème
de l''hébergement des dizaines de milliers de migrants qui ont été transportés
par bus jusqu''à New York et laissés à sa charge.
Mais une autre préoccupation se profile alors que la crise se poursuit sans qu''aucune
issue ne soit en vue : les retombées potentielles pour leur parti lors des élections
de l''année prochaine Les républicains ont tendance à se sentir en sécurité lorsqu''ils
parlent d''immigration - comme les démocrates le font pour l''avortement - et
sont clairement à l''attaque sur la question des migrants à New York, tandis que
les démocrates sont sur la défensive, a déclaré Kyle Kondik, directeur de la communication
pour le Centre de politique de l''Université de Virginie, au réseau USA Today
Plus de 100 000 migrants ont été transportés à New York depuis la frontière sud
depuis le printemps 2022. Environ 60 000 d''entre eux sont hébergés dans la ville,
et plus de 2 100 ont été transportés dans des hôtels situés dans sept comtés au
nord de la ville, de Yonkers à la périphérie de Buffalo, où ils sont logés aux
frais de la ville Les démocrates doivent y remporter des victoires pour gagner
cinq sièges à la Chambre et faire du député Hakeem Jeffries, de Brooklyn, le prochain
président de la Chambre des représentants Les publicités d''attaque des républicains
s''écrivent pratiquement d''elles-mêmes à partir d''un flot de titres et d''images
télévisées, alors que le gouverneur Kathy Hochul, le maire de New York Eric Adams
et le président Joe Biden - tous démocrates - se rejettent mutuellement la faute
et s''échangent des coups de feu pour savoir qui devrait en faire le plus Isaac
Goldberg, un stratège démocrate qui a travaillé sur plusieurs campagnes électorales
à New York, a affirmé qu''il était beaucoup trop tôt pour prédire l''impact politique
de la crise des migrants, soulignant que les élections de 2024 n''auront lieu
que dans 14 mois et que de nombreuses questions tout aussi urgentes pourraient
se poser'
- text: 'LE CANDIDAT A LA PRESIDENCE RAMASWAMY VEUT METTRE FIN AU SYSTEME DE VISA
H-1B AUX ETATS-UNIS
Décrivant le programme de visas H-1B comme une forme de "servitude", Vivek Ramaswamy,
candidat républicain indien-américain à l''élection présidentielle, a promis de
"vider" le système basé sur la loterie et de le remplacer par un système d''admission
méritocratique s''il remporte les élections présidentielles de 2024'
- text: 'Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris Owen Donald Glover
("Queer as Folk") a 54 ans Smith Hal Sparks Catherine Zeta-Jones son-Sampras Chris
Owen Donald Glover
("Queer as Folk") a 54 ans. a 54 ans. Acteur
("Je sais ce que vous avez fait l''été dernier") a 50 ans'
- text: 'Trump profiter de sa célébrité jusqu''à la Maison-Blanche.
"Cela a tué Howard parce qu''il était le roi de tous les médias Il a poursuivi
en disant que Trump ne laisserait pas ses partisans s''approcher de l''une de
ses propriétés. "Les gens qui votent pour Trump, pour la plupart, ne les laisseraient
même pas entrer dans un putain d''hôtel [ "Si être réveillé signifie que je ne
peux pas soutenir Trump, ce que je pense que cela signifie, ou que je soutiens
les personnes qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi
réveillé comme vous le voulez" "Les gens qui votent pour Trump, pour la plupart,
ne les laisseraient même pas entrer dans un putain d''hôtel [...]. Allez à Mar-a-lago,
voyez s''il y a des gens qui vous ressemblent" Stern a également abordé les affirmations
de Trump et de ses partisans selon lesquelles Joe Biden a remporté l''élection
américaine de 2020 grâce à des votes frauduleux "Et soudain, Trump a transformé
Howard, qui était le roi de tous les médias, en prince Harry de tous les médias.
Tout le monde s''en fout "Trump avait l''habitude de participer à l''émission
de Stern chaque semaine. Ils étaient amis. Alors cette idée que Trump est le pire
type qui ait jamais marché sur la surface de la terre, pourquoi traîniez-vous
avec lui ?"
M Mais Stern, qui par le passé a été accusé de racisme et de sexisme dans nombre
de ses sketches à l''antenne, a été un critique virulent de Trump tout au long
de sa présidence et, plus récemment, alors qu''il se prépare à se présenter à
nouveau en 2024.
En 2021, M "Combien de temps allons-nous continuer à élire des gens qui ont perdu
l''élection ?"
Il a poursuivi en qualifiant les partisans de Trump de "nigauds".
"Mon Dieu, j''ai l''impression d''être dans une nation de nigauds. J''espère qu''il
y a encore des gens brillants et dynamiques qui aiment ce pays", a-t-il déclaré
Alors cette idée que Trump est le pire type qui ait jamais marché sur la surface
de la terre, pourquoi traîniez-vous avec lui ?"
M. Failla a déclaré que cela avait "tué" M Si "woke" signifie que je ne peux pas
soutenir Trump, ce que je pense que cela signifie, ou que je soutiens les personnes
qui veulent être transgenres ou que je suis pour le vaccin, appelez-moi "woke"
comme vous voulez Celui qui se décrit comme le "roi de tous les médias" a critiqué
ouvertement l''ancien président américain Donald Trump, les anti-vaxx et, plus
récemment, Lauren Boebert, qu''il a critiquée pour son comportement obscène dans
un théâtre de Denver au début du mois "L''omnipotence médiatique de Donald Trump
a brisé Howard Stern. C''est très important", a déclaré Failla dans la vidéo (selon
OK ! Magazine). "Trump avait l''habitude de participer à l''émission de Stern
chaque semaine L''aversion d''Howard Stern pour Donald Trump, c''est "tout l''ego".
Si "woke" signifie que je ne peux pas soutenir Trump, ce que je pense que cela
signifie, ou que je soutiens les personnes qui veulent être transgenres ou que
je suis pour le vaccin, appelez-moi "woke" comme vous voulez Trump l''année prochaine.
"Je sais que je lui botterai le cul", a-t-il déclaré aux auditeurs.
L''année suivante, Stern a déclaré qu''il envisageait de se lancer dans la course
à la présidence "pour que le pays soit à nouveau juste" En réponse, Trump a partagé
sur sa plateforme Truth Social un clip de Fox News dans lequel l''animateur Jimmy
Failla critique Stern.
"L''omnipotence médiatique de Donald Trump a brisé Howard Stern "Je vais faire
la chose très simple qui remettra le pays sur le droit chemin : un vote, une personne",
a expliqué Stern, affirmant que Trump a en fait perdu l''élection de 2016 contre
Hillary Clinton qui a remporté le vote populaire - mais pas le collège électoral'
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy_score
value: 0.923784494086728
name: Accuracy_Score
- type: classification_report
value:
'0':
precision: 0.9251101321585903
recall: 0.8898305084745762
f1-score: 0.9071274298056154
support: 236
'1':
precision: 0.9081967213114754
recall: 0.920265780730897
f1-score: 0.9141914191419142
support: 301
'2':
precision: 0.9432314410480349
recall: 0.9642857142857143
f1-score: 0.9536423841059601
support: 224
accuracy: 0.923784494086728
macro avg:
precision: 0.9255127648393668
recall: 0.9247940011637291
f1-score: 0.9249870776844965
support: 761
weighted avg:
precision: 0.9237543325873079
recall: 0.923784494086728
f1-score: 0.9236131204146865
support: 761
name: Classification_Report
---
# SetFit with sentence-transformers/paraphrase-multilingual-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
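A minimal sketch of how these two stages surface in the loaded model, assuming the standard SetFit API (`model_body` / `model_head` attributes); the input string is a placeholder:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e5_beginning_offsets")

# Stage 1: the fine-tuned Sentence Transformer body embeds the text
embeddings = model.model_body.encode(["Texte d'actualité à classer"])

# Stage 2: the LogisticRegression head maps the embedding to a label
print(model.model_head.predict(embeddings))
```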
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------|
| pos | <ul><li>"Les PHL lèvent 1,26 milliard de dollars grâce aux obligations en dollars de détail\nLE GOUVERNEMENT PHILIPPIN a levé 1,26 milliard de dollars lors de la première émission d'obligations de détail en dollars (RDB) sous l'administration Marcos, a déclaré le ministère des Finances (DoF)"</li><li>"Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23 Atom Egoyan revient à Salomé, l'opéra qu'il a monté en 1996, avec Seven Veils\nAtom Egoyan n'a pas été surpris lorsque la Canadian Opera Company lui a demandé de remonter Salomé pour la saison 2022-23. Avec ses éléments de film et de vidéo, son interprétation psychologique et sombre de l'opéra de Richard Strauss avait un solide palmarès de reprises - depuis sa création en 1996, elle avait été présentée deux fois de plus à la COC et avait été reprise par plusieurs autres compagnies"</li><li>'Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public Paul Simon présente un documentaire sur sa carrière\nAprès un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\nTORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public "Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto.\nSimon, âgé de 81 ans, n\'avait pas regardé le film avant la première, et il ne l\'a pas regardé non plus dimanche TORONTO >> Après un documentaire de trois heures et demie sur sa vie, Paul Simon n\'avait que de la sympathie pour le public.\n"Il n\'y a pas de raison que vous soyez épuisés", a dit Simon à la foule après la première du documentaire d\'Alex Gibney "In Restless Dreams : The Music of Paul Simon" d\'Alex Gibney, dimanche au Festival international du film de Toronto'</li></ul> |
| neg | <ul><li>'Le groupe Al-Mostaqilla de l\'université du Koweït a appelé les étudiants à organiser un sit-in à l\'université du Koweït lundi pour protester contre la décision de mettre fin aux classes mixtes La décision a été prise la semaine dernière par le nouveau ministre de l\'éducation, Adel Al-Mane, et le directeur par intérim de l\'université du Koweït, Fayez Al-Dhafiri, et mise en œuvre mercredi, trois jours seulement avant le début de la nouvelle année universitaire à la faculté de droit L\'association a également demandé au gouvernement de "cesser ses interventions politiques et médiatiques injustifiées" dans les affaires de l\'université du Koweït.\nL\'association a appelé le directeur par intérim de l\'université du Koweït à ne pas céder aux pressions politiques et médiatiques et à s\'efforcer de protéger l\'indépendance de l\'université Dhafiri a déclaré que la décision avait été prise en application de la loi de 1996 qui interdisait l\'enseignement mixte à l\'université du Koweït, malgré une décision de la Cour constitutionnelle de 2015 autorisant l\'enseignement mixte lorsqu\'il était nécessaire et dans des cas exceptionnels Parallèlement, l\'association des professeurs de l\'université du Koweït a publié samedi une déclaration demandant aux députés et au gouvernement de "cesser d\'interférer dans les affaires de l\'université du Koweït" et de maintenir l\'indépendance de l\'université "L\'université du Koweït était, est et sera toujours le porte-drapeau de la connaissance et des valeurs, à l\'abri de toute influence extérieure Le député Abdulwahab Al-Essa a reproché à l\'administration de l\'université du Koweït d\'avoir succombé à la pression politique au détriment de l\'intérêt public, ajoutant que l\'université du Koweït avait appliqué correctement une décision de la cour constitutionnelle autorisant les classes mixtes chaque fois que cela était nécessaire'</li><li>"L'immigration étant l'un des défis les plus difficiles à relever pour le président Joe Biden et apparaissant comme un enjeu majeur des élections de l'année prochaine, l'administration délocalise essentiellement la question en s'appuyant sur les pays d'Amérique centrale et d'Amérique du Sud pour empêcher les migrants de se diriger vers le nord"</li><li>'Lors d\'une réunion d\'information mardi, le porte-parole de l\'armée, le lieutenant-colonel Richard Hecht, a suggéré que les Palestiniens tentent de quitter la bande de Gaza par le poste-frontière de Rafah, en Égypte.\nLa perspective d\'un exode des habitants de Gaza vers le territoire égyptien a alarmé les autorités égyptiennes La question qui se pose est de savoir si Israël lancera une offensive terrestre dans la bande de Gaza, une bande de terre de 25 miles de long coincée entre Israël, l\'Égypte et la mer Méditerranée, où vivent 2,3 millions de personnes et qui est gouvernée par le Hamas depuis 2007 Israël pilonne la bande de Gaza ; les habitants se précipitent pour se mettre à l\'abri\nJERUSALEM - Les avions de combat israéliens ont bombardé la bande de Gaza quartier par quartier mardi, réduisant les bâtiments en ruines et poussant les habitants à se précipiter pour se mettre à l\'abri dans ce minuscule territoire isolé, alors qu\'Israël promet des représailles pour l\'attaque surprise du Hamas du week-end qui "se répercuteront Les autorités égyptiennes discutent avec Israël et les États-Unis afin de mettre en place des corridors humanitaires dans la bande de Gaza pour acheminer l\'aide, a déclaré un responsable égyptien. Des négociations sont en cours avec les Israéliens pour que la zone autour du point de passage de Rafah entre l\'Égypte et Gaza soit déclarée "zone d\'interdiction de feu", a déclaré le responsable, sous couvert d\'anonymat car il n\'était pas autorisé à parler aux médias'</li></ul> |
| obj | <ul><li>"L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie Trump, le candidat républicain à la primaire de 2024, pour améliorer l'économie, avec une marge de 47 % à 36 %. L'écart est de 46 %-26 % en faveur de M. Trump parmi les électeurs indépendants Presque tous les républicains interrogés ont exprimé leur pessimisme à l'égard de l'économie, selon le sondage : 96 % d'entre eux estiment que la situation se dégrade au lieu de s'améliorer Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore L'économie pèse sur les Américains Ils sont plus nombreux à faire confiance à Trump qu'à Biden pour alléger leur fardeau\nWASHINGTON - Linda Muñoz a peur de l'économie. Elle a puisé dans son épargne d'urgence cette année. Et elle ne croit pas que le président Joe Biden ressente sa douleur L'épicerie. Le logement. L'essence. Tous ces éléments poussent les gens à s'endetter de plus en plus, disent-ils.\nSelon le sondage, près de 70 % des Américains estiment que la situation économique se dégrade, tandis que 22 % seulement estiment qu'elle s'améliore"</li><li>'Le Pentagone va interroger d\'autres militaires sur l\'attentat suicide de l\'aéroport de Kaboul en 2021\nLe commandement central du Pentagone a ordonné l\'audition d\'une vingtaine de militaires supplémentaires qui se trouvaient à l\'aéroport de Kaboul lorsque des kamikazes ont attaqué pendant le retrait chaotique des forces américaines d\'Afghanistan, alors que les critiques persistent sur le fait que l\'attaque meurtrière aurait pu être stoppée Certaines familles des personnes tuées ou blessées se sont plaintes que le Pentagone n\'avait pas fait preuve de suffisamment de transparence au sujet de l\'attentat à la bombe qui a tué 170 Afghans\net 13 militaires américains.\nL\'enquête du commandement central américain a conclu en novembre 2021 qu\'étant donné la détérioration de la sécurité à la porte de l\'Abbaye de l\'aéroport alors que les Afghans cherchaient de plus en plus à fuir, "l\'attaque n\'aurait pas pu être évitée au niveau tactique sans dégrader la mission visant à maximiser le nombre d\'évacués" Le Pentagone a déclaré que l\'examen de l\'attentat suicide n\'avait révélé aucune identification préalable d\'un attaquant possible ni aucune demande d\'"escalade des règles d\'engagement existantes" régissant l\'utilisation de la force par les troupes américaines'</li><li>'Les retombées de la guerre se répercutent sur les lieux de travail aux États-Unis.\nNEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus "À quoi me sert mon travail si je compromets ma propre morale et mon éthique ?\nL\'un des conflits les plus importants s\'est produit chez Starbucks après que Starbucks Workers United, un syndicat représentant 9 000 travailleurs dans plus de 360 magasins aux États-Unis, a tweeté "Solidarité avec la Palestine" deux jours après l\'attaque du Hamas. Le tweet a été supprimé au bout de 40 minutes, mais l\'entreprise a déclaré qu\'il avait donné lieu à plus de 1 000 plaintes, à des actes de vandalisme et à des affrontements dans ses magasins NEW YORK - Les retombées de la guerre entre Israël et le Hamas se sont répandues sur les lieux de travail partout dans le monde, les dirigeants de grandes entreprises exprimant leur point de vue tandis que les travailleurs se plaignent de ne pas être entendus'</li></ul> |
## Evaluation
### Metrics
| Label   | Accuracy |
|:--------|:---------|
| **all** | 0.9238   |

Per-class classification report (class ids as emitted by the model head):

| Class        | Precision | Recall | F1-score | Support |
|:-------------|:----------|:-------|:---------|:--------|
| 0            | 0.9251    | 0.8898 | 0.9071   | 236     |
| 1            | 0.9082    | 0.9203 | 0.9142   | 301     |
| 2            | 0.9432    | 0.9643 | 0.9536   | 224     |
| macro avg    | 0.9255    | 0.9248 | 0.9250   | 761     |
| weighted avg | 0.9238    | 0.9238 | 0.9236   | 761     |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e5_beginning_offsets")
# Run inference
preds = model(
    """Adil Hussain
Adil Hussain est reconnaissant d'avoir reçu l'enseignement de l'acteur Naseeruddin Shah à l'époque où il fréquentait l'École nationale d'art dramatique"""
)
```
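For batch scoring, `model.predict` accepts a list of texts and returns one prediction per input. A minimal sketch with hypothetical French inputs (the texts below are illustrative, not taken from the training data):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("mogaio/pr_ebsa_fr_tran_merged25_e5_beginning_offsets")

# Hypothetical inputs for illustration
texts = [
    "Le film a été accueilli avec enthousiasme par la critique",
    "Le ministère a publié ses chiffres trimestriels mercredi",
]

# One prediction per input; depending on how the head was fitted, each is
# either a class id (0/1/2, as in the report above) or a label name (neg/obj/pos)
preds = model.predict(texts)
for text, pred in zip(texts, preds):
    print(f"{pred}\t{text}")
```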
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:-----|
| Word count | 9 | 247.2638 | 2089 |
| Label | Training Sample Count |
|:------|:----------------------|
| neg | 913 |
| obj | 1216 |
| pos | 911 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 1
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
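These values correspond one-to-one to `setfit.TrainingArguments` fields in SetFit 1.0.x. A minimal sketch of an equivalent setup, assuming a 🤗 `Dataset` with `text` and `label` columns and a placeholder base model (the card does not name the underlying sentence transformer):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

# Toy stand-in for the real splits (913 neg / 1216 obj / 911 pos samples)
train_dataset = Dataset.from_dict({
    "text": ["exemple négatif", "exemple objectif", "exemple positif"],
    "label": [0, 1, 2],
})

# Placeholder body model; substitute the sentence transformer actually used
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

args = TrainingArguments(
    batch_size=(8, 8),
    num_epochs=(5, 5),
    sampling_strategy="oversampling",
    num_iterations=1,
    body_learning_rate=(2e-05, 2e-05),
    head_learning_rate=2e-05,
    loss=CosineSimilarityLoss,
    margin=0.25,
    end_to_end=False,
    use_amp=False,
    warmup_proportion=0.1,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```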
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0013 | 1 | 0.3703 | - |
| 0.0658 | 50 | 0.3145 | - |
| 0.1316 | 100 | 0.1839 | - |
| 0.1974 | 150 | 0.2558 | - |
| 0.2632 | 200 | 0.2683 | - |
| 0.3289 | 250 | 0.1572 | - |
| 0.3947 | 300 | 0.1953 | - |
| 0.4605 | 350 | 0.171 | - |
| 0.5263 | 400 | 0.2326 | - |
| 0.5921 | 450 | 0.1762 | - |
| 0.6579 | 500 | 0.2818 | - |
| 0.7237 | 550 | 0.2733 | - |
| 0.7895 | 600 | 0.195 | - |
| 0.8553 | 650 | 0.2104 | - |
| 0.9211 | 700 | 0.2124 | - |
| 0.9868 | 750 | 0.0818 | - |
| 1.0526 | 800 | 0.1046 | - |
| 1.1184 | 850 | 0.1633 | - |
| 1.1842 | 900 | 0.3207 | - |
| 1.25 | 950 | 0.2703 | - |
| 1.3158 | 1000 | 0.1934 | - |
| 1.3816 | 1050 | 0.2547 | - |
| 1.4474 | 1100 | 0.0933 | - |
| 1.5132 | 1150 | 0.2102 | - |
| 1.5789 | 1200 | 0.0699 | - |
| 1.6447 | 1250 | 0.1778 | - |
| 1.7105 | 1300 | 0.1796 | - |
| 1.7763 | 1350 | 0.0221 | - |
| 1.8421 | 1400 | 0.2154 | - |
| 1.9079 | 1450 | 0.1683 | - |
| 1.9737 | 1500 | 0.3096 | - |
| 2.0395 | 1550 | 0.201 | - |
| 2.1053 | 1600 | 0.1954 | - |
| 2.1711 | 1650 | 0.2301 | - |
| 2.2368 | 1700 | 0.1141 | - |
| 2.3026 | 1750 | 0.1949 | - |
| 2.3684 | 1800 | 0.164 | - |
| 2.4342 | 1850 | 0.2307 | - |
| 2.5 | 1900 | 0.1912 | - |
| 2.5658 | 1950 | 0.2349 | - |
| 2.6316 | 2000 | 0.0922 | - |
| 2.6974 | 2050 | 0.0702 | - |
| 2.7632 | 2100 | 0.1089 | - |
| 2.8289 | 2150 | 0.1711 | - |
| 2.8947 | 2200 | 0.1432 | - |
| 2.9605 | 2250 | 0.2739 | - |
| 3.0263 | 2300 | 0.1889 | - |
| 3.0921 | 2350 | 0.1036 | - |
| 3.1579 | 2400 | 0.1372 | - |
| 3.2237 | 2450 | 0.028 | - |
| 3.2895 | 2500 | 0.1739 | - |
| 3.3553 | 2550 | 0.142 | - |
| 3.4211 | 2600 | 0.0838 | - |
| 3.4868 | 2650 | 0.0657 | - |
| 3.5526 | 2700 | 0.0054 | - |
| 3.6184 | 2750 | 0.0426 | - |
| 3.6842 | 2800 | 0.1974 | - |
| 3.75 | 2850 | 0.0279 | - |
| 3.8158 | 2900 | 0.1326 | - |
| 3.8816 | 2950 | 0.1614 | - |
| 3.9474 | 3000 | 0.1251 | - |
| 4.0132 | 3050 | 0.1174 | - |
| 4.0789 | 3100 | 0.1948 | - |
| 4.1447 | 3150 | 0.0555 | - |
| 4.2105 | 3200 | 0.0064 | - |
| 4.2763 | 3250 | 0.064 | - |
| 4.3421 | 3300 | 0.0013 | - |
| 4.4079 | 3350 | 0.135 | - |
| 4.4737 | 3400 | 0.0574 | - |
| 4.5395 | 3450 | 0.174 | - |
| 4.6053 | 3500 | 0.2199 | - |
| 4.6711 | 3550 | 0.387 | - |
| 4.7368 | 3600 | 0.114 | - |
| 4.8026 | 3650 | 0.0853 | - |
| 4.8684 | 3700 | 0.0325 | - |
| 4.9342 | 3750 | 0.019 | - |
| 5.0 | 3800 | 0.0572 | - |
| 0.0013 | 1 | 0.1435 | - |
| 0.0658 | 50 | 0.0969 | - |
| 0.1316 | 100 | 0.1085 | - |
| 0.1974 | 150 | 0.0271 | - |
| 0.2632 | 200 | 0.0138 | - |
| 0.3289 | 250 | 0.058 | - |
| 0.3947 | 300 | 0.1205 | - |
| 0.4605 | 350 | 0.0788 | - |
| 0.5263 | 400 | 0.1449 | - |
| 0.5921 | 450 | 0.0383 | - |
| 0.6579 | 500 | 0.0338 | - |
| 0.7237 | 550 | 0.1253 | - |
| 0.7895 | 600 | 0.069 | - |
| 0.8553 | 650 | 0.104 | - |
| 0.9211 | 700 | 0.0462 | - |
| 0.9868 | 750 | 0.1975 | - |
| 1.0526 | 800 | 0.0241 | - |
| 1.1184 | 850 | 0.0426 | - |
| 1.1842 | 900 | 0.0519 | - |
| 1.25 | 950 | 0.0815 | - |
| 1.3158 | 1000 | 0.1839 | - |
| 1.3816 | 1050 | 0.0198 | - |
| 1.4474 | 1100 | 0.0128 | - |
| 1.5132 | 1150 | 0.1645 | - |
| 1.5789 | 1200 | 0.0019 | - |
| 1.6447 | 1250 | 0.0557 | - |
| 1.7105 | 1300 | 0.0098 | - |
| 1.7763 | 1350 | 0.001 | - |
| 1.8421 | 1400 | 0.1557 | - |
| 1.9079 | 1450 | 0.1286 | - |
| 1.9737 | 1500 | 0.094 | - |
| 2.0395 | 1550 | 0.0059 | - |
| 2.1053 | 1600 | 0.0227 | - |
| 2.1711 | 1650 | 0.0899 | - |
| 2.2368 | 1700 | 0.0053 | - |
| 2.3026 | 1750 | 0.0021 | - |
| 2.3684 | 1800 | 0.0114 | - |
| 2.4342 | 1850 | 0.1163 | - |
| 2.5 | 1900 | 0.0959 | - |
| 2.5658 | 1950 | 0.0252 | - |
| 2.6316 | 2000 | 0.0921 | - |
| 2.6974 | 2050 | 0.1159 | - |
| 2.7632 | 2100 | 0.0026 | - |
| 2.8289 | 2150 | 0.1211 | - |
| 2.8947 | 2200 | 0.1843 | - |
| 2.9605 | 2250 | 0.0014 | - |
| 3.0263 | 2300 | 0.0085 | - |
| 3.0921 | 2350 | 0.0839 | - |
| 3.1579 | 2400 | 0.2372 | - |
| 3.2237 | 2450 | 0.0213 | - |
| 3.2895 | 2500 | 0.0155 | - |
| 3.3553 | 2550 | 0.1128 | - |
| 3.4211 | 2600 | 0.0945 | - |
| 3.4868 | 2650 | 0.0917 | - |
| 3.5526 | 2700 | 0.0011 | - |
| 3.6184 | 2750 | 0.0024 | - |
| 3.6842 | 2800 | 0.0044 | - |
| 3.75 | 2850 | 0.121 | - |
| 3.8158 | 2900 | 0.0056 | - |
| 3.8816 | 2950 | 0.003 | - |
| 3.9474 | 3000 | 0.0899 | - |
| 4.0132 | 3050 | 0.0157 | - |
| 4.0789 | 3100 | 0.1188 | - |
| 4.1447 | 3150 | 0.001 | - |
| 4.2105 | 3200 | 0.0222 | - |
| 4.2763 | 3250 | 0.1209 | - |
| 4.3421 | 3300 | 0.1085 | - |
| 4.4079 | 3350 | 0.0054 | - |
| 4.4737 | 3400 | 0.0009 | - |
| 4.5395 | 3450 | 0.0015 | - |
| 4.6053 | 3500 | 0.003 | - |
| 4.6711 | 3550 | 0.0009 | - |
| 4.7368 | 3600 | 0.0003 | - |
| 4.8026 | 3650 | 0.0009 | - |
| 4.8684 | 3700 | 0.03 | - |
| 4.9342 | 3750 | 0.1206 | - |
| 5.0 | 3800 | 0.0003 | - |
| 0.0013 | 1 | 0.2045 | - |
| 0.0658 | 50 | 0.0078 | - |
| 0.1316 | 100 | 0.0087 | - |
| 0.1974 | 150 | 0.0386 | - |
| 0.2632 | 200 | 0.1015 | - |
| 0.3289 | 250 | 0.0022 | - |
| 0.3947 | 300 | 0.0291 | - |
| 0.4605 | 350 | 0.0013 | - |
| 0.5263 | 400 | 0.0022 | - |
| 0.5921 | 450 | 0.1324 | - |
| 0.6579 | 500 | 0.113 | - |
| 0.7237 | 550 | 0.0011 | - |
| 0.7895 | 600 | 0.1723 | - |
| 0.8553 | 650 | 0.0049 | - |
| 0.9211 | 700 | 0.206 | - |
| 0.9868 | 750 | 0.1683 | - |
| 1.0526 | 800 | 0.0954 | - |
| 1.1184 | 850 | 0.018 | - |
| 1.1842 | 900 | 0.1854 | - |
| 1.25 | 950 | 0.0342 | - |
| 1.3158 | 1000 | 0.0015 | - |
| 1.3816 | 1050 | 0.0062 | - |
| 1.4474 | 1100 | 0.1187 | - |
| 1.5132 | 1150 | 0.0048 | - |
| 1.5789 | 1200 | 0.0011 | - |
| 1.6447 | 1250 | 0.002 | - |
| 1.7105 | 1300 | 0.092 | - |
| 1.7763 | 1350 | 0.1245 | - |
| 1.8421 | 1400 | 0.0009 | - |
| 1.9079 | 1450 | 0.1185 | - |
| 1.9737 | 1500 | 0.0017 | - |
| 2.0395 | 1550 | 0.008 | - |
| 2.1053 | 1600 | 0.0049 | - |
| 2.1711 | 1650 | 0.0083 | - |
| 2.2368 | 1700 | 0.0026 | - |
| 2.3026 | 1750 | 0.0081 | - |
| 2.3684 | 1800 | 0.0036 | - |
| 2.4342 | 1850 | 0.0016 | - |
| 2.5 | 1900 | 0.0017 | - |
| 2.5658 | 1950 | 0.0014 | - |
| 2.6316 | 2000 | 0.0017 | - |
| 2.6974 | 2050 | 0.002 | - |
| 2.7632 | 2100 | 0.1022 | - |
| 2.8289 | 2150 | 0.0004 | - |
| 2.8947 | 2200 | 0.0007 | - |
| 2.9605 | 2250 | 0.0794 | - |
| 3.0263 | 2300 | 0.0183 | - |
| 3.0921 | 2350 | 0.0377 | - |
| 3.1579 | 2400 | 0.029 | - |
| 3.2237 | 2450 | 0.0003 | - |
| 3.2895 | 2500 | 0.0961 | - |
| 3.3553 | 2550 | 0.0008 | - |
| 3.4211 | 2600 | 0.0873 | - |
| 3.4868 | 2650 | 0.0501 | - |
| 3.5526 | 2700 | 0.0029 | - |
| 3.6184 | 2750 | 0.0008 | - |
| 3.6842 | 2800 | 0.0004 | - |
| 3.75 | 2850 | 0.0011 | - |
| 3.8158 | 2900 | 0.0518 | - |
| 3.8816 | 2950 | 0.0002 | - |
| 3.9474 | 3000 | 0.1115 | - |
| 4.0132 | 3050 | 0.0129 | - |
| 4.0789 | 3100 | 0.0005 | - |
| 4.1447 | 3150 | 0.0012 | - |
| 4.2105 | 3200 | 0.1086 | - |
| 4.2763 | 3250 | 0.0199 | - |
| 4.3421 | 3300 | 0.0004 | - |
| 4.4079 | 3350 | 0.0001 | - |
| 4.4737 | 3400 | 0.0832 | - |
| 4.5395 | 3450 | 0.0003 | - |
| 4.6053 | 3500 | 0.0041 | - |
| 4.6711 | 3550 | 0.1146 | - |
| 4.7368 | 3600 | 0.0027 | - |
| 4.8026 | 3650 | 0.0002 | - |
| 4.8684 | 3700 | 0.0544 | - |
| 4.9342 | 3750 | 0.0002 | - |
| 5.0 | 3800 | 0.0046 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.15.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"CAS"
] |
rjnClarke/BAAI-bge-m3-fine-tuned | rjnClarke | sentence-similarity | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10359",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-06T10:27:46 | 2024-08-06T10:29:10 | 48 | 0 | ---
base_model: BAAI/bge-m3
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@3
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@200
- cosine_map@100
- dot_accuracy@3
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@200
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10359
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Cleopatra reacts to the news of Antony's death with a mixture of
sadness and resignation, contemplating her own mortality and the fickle nature
of life.
sentences:
- "Immortal longings in me. Now no more The juice of Egypt's grape shall moist\
\ this lip. Yare, yare, good Iras; quick. Methinks I hear Antony call. I\
\ see him rouse himself To praise my noble act. I hear him mock The luck\
\ of Caesar, which the gods give men To excuse their after wrath. Husband,\
\ I come. Now to that name my courage prove my title! I am fire and air;\
\ my other elements I give to baser life. So, have you done? Come then,\
\ and take the last warmth of my lips. Farewell, kind Charmian. Iras, long\
\ farewell. [Kisses them. IRAS falls and dies] \
\ Have I the aspic in my lips? Dost fall? If thus thou and nature can so gently\
\ part, The stroke of death is as a lover's pinch, Which hurts and is desir'd.\
\ Dost thou lie still? If thou vanishest, thou tell'st the world It is\
\ not worth leave-taking. CHARMIAN. Dissolve, thick cloud, and rain, that I may\
\ say The gods themselves do weep. CLEOPATRA. This proves me base.\n \
\ If she first meet the curled Antony,\n"
- "BURGUNDY. Warlike and martial Talbot, Burgundy\n Enshrines thee in his heart,\
\ and there erects Thy noble deeds as valour's monuments. TALBOT. Thanks,\
\ gentle Duke. But where is Pucelle now? I think her old familiar is asleep.\
\ Now where's the Bastard's braves, and Charles his gleeks? What, all amort?\
\ Rouen hangs her head for grief That such a valiant company are fled. Now\
\ will we take some order in the town, Placing therein some expert officers;\
\ And then depart to Paris to the King, For there young Henry with his nobles\
\ lie. BURGUNDY. What Lord Talbot pleaseth Burgundy. TALBOT. But yet, before\
\ we go, let's not forget The noble Duke of Bedford, late deceas'd, But\
\ see his exequies fulfill'd in Rouen. A braver soldier never couched lance,\
\ A gentler heart did never sway in court; But kings and mightiest potentates\
\ must die, For that's the end of human misery. Exeunt\n"
- "Your suffering in this dearth, you may as well\n Strike at the heaven with\
\ your staves as lift them Against the Roman state; whose course will on \
\ The way it takes, cracking ten thousand curbs Of more strong link asunder\
\ than can ever Appear in your impediment. For the dearth, The gods, not\
\ the patricians, make it, and Your knees to them, not arms, must help. Alack,\
\ You are transported by calamity Thither where more attends you; and you\
\ slander The helms o' th' state, who care for you like fathers, When you\
\ curse them as enemies. FIRST CITIZEN. Care for us! True, indeed! They ne'er\
\ car'd for us yet. Suffer us to famish, and their storehouses cramm'd with\
\ grain; make edicts for usury, to support usurers; repeal daily any wholesome\
\ act established against the rich, and provide more piercing statutes daily\
\ to chain up and restrain the poor. If the wars eat us not up, they will;\
\ and there's all the love they bear us. MENENIUS. Either you must Confess\
\ yourselves wondrous malicious, Or be accus'd of folly. I shall tell you \
\ A pretty tale. It may be you have heard it; But, since it serves my purpose,\
\ I will venture To stale't a little more. FIRST CITIZEN. Well, I'll hear\
\ it, sir; yet you must not think to fob off our disgrace with a tale. But,\
\ an't please you, deliver. MENENIUS. There was a time when all the body's members\
\ Rebell'd against the belly; thus accus'd it: That only like a gulf it\
\ did remain I' th' midst o' th' body, idle and unactive, Still cupboarding\
\ the viand, never bearing Like labour with the rest; where th' other instruments\
\ Did see and hear, devise, instruct, walk, feel,\n And, mutually participate,\
\ did minister\n"
- source_sentence: How does the excerpt reflect themes of loyalty and sacrifice in
the play?
sentences:
- "me a thousand marks in links and torches, walking with thee in\n the night\
\ betwixt tavern and tavern; but the sack that thou hast drunk me would have\
\ bought me lights as good cheap at the dearest chandler's in Europe. I have\
\ maintained that salamander of yours with fire any time this two-and-thirty\
\ years. God reward me for it! Bard. 'Sblood, I would my face were in your\
\ belly! Fal. God-a-mercy! so should I be sure to be heart-burn'd.\n \
\ Enter Hostess. How now, Dame Partlet the hen? Have you enquir'd\
\ yet who pick'd\n my pocket? Host. Why, Sir John, what do you think, Sir\
\ John? Do you think I keep thieves in my house? I have search'd, I have enquired,\
\ so has my husband, man by man, boy by boy, servant by servant. The tithe\
\ of a hair was never lost in my house before. Fal. Ye lie, hostess. Bardolph\
\ was shav'd and lost many a hair, and I'll be sworn my pocket was pick'd.\
\ Go to, you are a woman, go! Host. Who, I? No; I defy thee! God's light, I was\
\ never call'd so in mine own house before! Fal. Go to, I know you well enough.\
\ Host. No, Sir John; you do not know me, Sir John. I know you, Sir John.\
\ You owe me money, Sir John, and now you pick a quarrel to beguile me of\
\ it. I bought you a dozen of shirts to your back. Fal. Dowlas, filthy dowlas!\
\ I have given them away to bakers' wives; they have made bolters of them.\
\ Host. Now, as I am a true woman, holland of eight shillings an ell. You\
\ owe money here besides, Sir John, for your diet and by-drinkings, and money\
\ lent you, four-and-twenty pound. Fal. He had his part of it; let him pay. \
\ Host. He? Alas, he is poor; he hath nothing. Fal. How? Poor? Look upon his\
\ face. What call you rich? Let them coin his nose, let them coin his cheeks.\
\ I'll not pay a denier.\n What, will you make a younker of me? Shall I not\
\ take mine ease\n"
- "EDWARD. I wonder how our princely father scap'd,\n Or whether he be scap'd\
\ away or no From Clifford's and Northumberland's pursuit. Had he been ta'en,\
\ we should have heard the news; Had he been slain, we should have heard the\
\ news; Or had he scap'd, methinks we should have heard The happy tidings\
\ of his good escape. How fares my brother? Why is he so sad? RICHARD. I cannot\
\ joy until I be resolv'd Where our right valiant father is become. I saw\
\ him in the battle range about, And watch'd him how he singled Clifford forth.\
\ Methought he bore him in the thickest troop As doth a lion in a herd of\
\ neat;\n Or as a bear, encompass'd round with dogs,\n Who having pinch'd\
\ a few and made them cry, The rest stand all aloof and bark at him. So\
\ far'd our father with his enemies; So fled his enemies my warlike father.\
\ Methinks 'tis prize enough to be his son. See how the morning opes her\
\ golden gates And takes her farewell of the glorious sun. How well resembles\
\ it the prime of youth, Trimm'd like a younker prancing to his love! EDWARD.\
\ Dazzle mine eyes, or do I see three suns? RICHARD. Three glorious suns, each\
\ one a perfect sun; Not separated with the racking clouds, But sever'd\
\ in a pale clear-shining sky. See, see! they join, embrace, and seem to kiss,\
\ As if they vow'd some league inviolable. Now are they but one lamp, one\
\ light, one sun. In this the heaven figures some event. EDWARD. 'Tis wondrous\
\ strange, the like yet never heard of. I think it cites us, brother, to the\
\ field, That we, the sons of brave Plantagenet, Each one already blazing\
\ by our meeds, Should notwithstanding join our lights together And overshine\
\ the earth, as this the world. Whate'er it bodes, henceforward will I bear\
\ Upon my target three fair shining suns. RICHARD. Nay, bear three daughters-\
\ by your leave I speak it, You love the breeder better than the male.\n"
- "Forget that rarest treasure of your cheek,\n Exposing it- but, O, the harder\
\ heart! Alack, no remedy!- to the greedy touch Of common-kissing Titan,\
\ and forget Your laboursome and dainty trims wherein You made great Juno\
\ angry. IMOGEN. Nay, be brief; I see into thy end, and am almost A man\
\ already. PISANIO. First, make yourself but like one. Fore-thinking this,\
\ I have already fit- 'Tis in my cloak-bag- doublet, hat, hose, all That\
\ answer to them. Would you, in their serving, And with what imitation you\
\ can borrow From youth of such a season, fore noble Lucius Present yourself,\
\ desire his service, tell him Wherein you're happy- which will make him know\
\ If that his head have ear in music; doubtless With joy he will embrace\
\ you; for he's honourable, And, doubling that, most holy. Your means abroad-\
\ You have me, rich; and I will never fail Beginning nor supplyment. IMOGEN.\
\ Thou art all the comfort The gods will diet me with. Prithee away! There's\
\ more to be consider'd; but we'll even All that good time will give us. This\
\ attempt I am soldier to, and will abide it with A prince's courage. Away,\
\ I prithee. PISANIO. Well, madam, we must take a short farewell, Lest, being\
\ miss'd, I be suspected of Your carriage from the court. My noble mistress,\
\ Here is a box; I had it from the Queen. What's in't is precious. If you\
\ are sick at sea Or stomach-qualm'd at land, a dram of this\n Will drive\
\ away distemper. To some shade,\n And fit you to your manhood. May the gods\
\ Direct you to the best! IMOGEN. Amen. I thank thee. Exeunt\
\ severally\n"
- source_sentence: The excerpt showcases the emotional turmoil and sense of honor
that drives Brutus to take his own life in the face of defeat.
sentences:
- "Thou know'st that we two went to school together;\n Even for that our love\
\ of old, I prithee, Hold thou my sword-hilts, whilst I run on it. VOLUMNIUS.\
\ That's not an office for a friend, my lord. \
\ Alarum still. CLITUS. Fly, fly, my lord, there is no tarrying\
\ here. BRUTUS. Farewell to you, and you, and you, Volumnius. Strato, thou\
\ hast been all this while asleep; Farewell to thee too, Strato. Countrymen,\
\ My heart doth joy that yet in all my life I found no man but he was true\
\ to me. I shall have glory by this losing day, More than Octavius and Mark\
\ Antony By this vile conquest shall attain unto. So, fare you well at once,\
\ for Brutus' tongue Hath almost ended his life's history. Night hangs upon\
\ mine eyes, my bones would rest That have but labor'd to attain this hour.\
\ Alarum. Cry within, \"Fly, fly, fly!\" CLITUS. Fly,\
\ my lord, fly. BRUTUS. Hence! I will follow. Exeunt Clitus,\
\ Dardanius, and Volumnius. I prithee, Strato, stay thou by thy lord. Thou\
\ art a fellow of a good respect; Thy life hath had some smatch of honor in\
\ it. Hold then my sword, and turn away thy face, While I do run upon it.\
\ Wilt thou, Strato? STRATO. Give me your hand first. Fare you well, my lord.\
\ BRUTUS. Farewell, good Strato. Runs on his sword. Caesar,\
\ now be still; I kill'd not thee with half so good a will. Dies.\n\
\ Alarum. Retreat. Enter Octavius, Antony, Messala,\n Lucilius,\
\ and the Army.\n OCTAVIUS. What man is that?\n"
- "Elsinore. A room in the Castle.\nEnter King, Queen, Polonius, Ophelia, Rosencrantz,\
\ Guildenstern, and Lords. King. And can you by no drift of circumstance\n \
\ Get from him why he puts on this confusion, Grating so harshly all his days\
\ of quiet With turbulent and dangerous lunacy? Ros. He does confess he feels\
\ himself distracted, But from what cause he will by no means speak. Guil.\
\ Nor do we find him forward to be sounded, But with a crafty madness keeps\
\ aloof When we would bring him on to some confession Of his true state.\
\ Queen. Did he receive you well? Ros. Most like a gentleman. Guil. But with\
\ much forcing of his disposition. Ros. Niggard of question, but of our demands\
\ Most free in his reply. Queen. Did you assay him To any pastime? Ros.\
\ Madam, it so fell out that certain players\n We o'erraught on the way.\
\ Of these we told him,\n"
- "VII.\nThe French camp near Agincourt\nEnter the CONSTABLE OF FRANCE, the LORD\
\ RAMBURES, the DUKE OF ORLEANS,\nthe DAUPHIN, with others\n CONSTABLE. Tut!\
\ I have the best armour of the world.\n Would it were day! ORLEANS. You have\
\ an excellent armour; but let my horse have his due. CONSTABLE. It is the\
\ best horse of Europe. ORLEANS. Will it never be morning? DAUPHIN. My Lord\
\ of Orleans and my Lord High Constable, you talk of horse and armour? ORLEANS.\
\ You are as well provided of both as any prince in the world. DAUPHIN. What\
\ a long night is this! I will not change my horse with any that treads but\
\ on four pasterns. Ca, ha! he bounds from the earth as if his entrails were\
\ hairs; le cheval volant, the Pegasus, chez les narines de feu! When I bestride\
\ him I soar, I am a hawk. He trots the air; the earth sings when he touches\
\ it; the basest horn of his hoof is more musical than the pipe of Hermes.\
\ ORLEANS. He's of the colour of the nutmeg. DAUPHIN. And of the heat of the\
\ ginger. It is a beast for Perseus: he is pure air and fire; and the dull\
\ elements of earth and water never appear in him, but only in patient stillness\
\ while his rider mounts him; he is indeed a horse, and all other jades you\
\ may call beasts. CONSTABLE. Indeed, my lord, it is a most absolute and excellent\
\ horse.\n DAUPHIN. It is the prince of palfreys; his neigh is like the\n"
- source_sentence: What themes are present in the excerpt from the play?
sentences:
- "Enter TRAVERS NORTHUMBERLAND. Here comes my servant Travers, whom I sent\n \
\ On Tuesday last to listen after news. LORD BARDOLPH. My lord, I over-rode\
\ him on the way; And he is furnish'd with no certainties More than he haply\
\ may retail from me. NORTHUMBERLAND. Now, Travers, what good tidings comes with\
\ you? TRAVERS. My lord, Sir John Umfrevile turn'd me back With joyful tidings;\
\ and, being better hors'd, Out-rode me. After him came spurring hard A\
\ gentleman, almost forspent with speed, That stopp'd by me to breathe his\
\ bloodied horse. He ask'd the way to Chester; and of him I did demand what\
\ news from Shrewsbury. He told me that rebellion had bad luck, And that\
\ young Harry Percy's spur was cold. With that he gave his able horse the\
\ head And, bending forward, struck his armed heels\n Against the panting\
\ sides of his poor jade\n Up to the rowel-head; and starting so, He seem'd\
\ in running to devour the way, Staying no longer question. NORTHUMBERLAND.\
\ Ha! Again: Said he young Harry Percy's spur was cold? Of Hotspur, Coldspur?\
\ that rebellion Had met ill luck? LORD BARDOLPH. My lord, I'll tell you what:\
\ If my young lord your son have not the day, Upon mine honour, for a silken\
\ point I'll give my barony. Never talk of it. NORTHUMBERLAND. Why should\
\ that gentleman that rode by Travers Give then such instances of loss? LORD\
\ BARDOLPH. Who- he? He was some hilding fellow that had stol'n The horse\
\ he rode on and, upon my life, Spoke at a venture. Look, here comes more news.\
\ \n Enter Morton NORTHUMBERLAND. Yea, this man's brow,\
\ like to a title-leaf,\n"
- "ANTONY. Yet they are not join'd. Where yond pine does stand\n I shall discover\
\ all. I'll bring thee word Straight how 'tis like to go. \
\ Exit SCARUS. Swallows have built In Cleopatra's sails their nests.\
\ The augurers Say they know not, they cannot tell; look grimly, And dare\
\ not speak their knowledge. Antony Is valiant and dejected; and by starts\
\ His fretted fortunes give him hope and fear Of what he has and has not.\
\ [Alarum afar off, as at a sea-fight]\n \
\ Re-enter ANTONY ANTONY. All is lost!\n This foul Egyptian hath\
\ betrayed me. My fleet hath yielded to the foe, and yonder They cast\
\ their caps up and carouse together Like friends long lost. Triple-turn'd\
\ whore! 'tis thou\n Hast sold me to this novice; and my heart\n Makes\
\ only wars on thee. Bid them all fly; For when I am reveng'd upon my charm,\
\ I have done all. Bid them all fly; begone. Exit SCARUS O sun, thy\
\ uprise shall I see no more! Fortune and Antony part here; even here Do\
\ we shake hands. All come to this? The hearts That spaniel'd me at heels,\
\ to whom I gave Their wishes, do discandy, melt their sweets On blossoming\
\ Caesar; and this pine is bark'd That overtopp'd them all. Betray'd I am.\
\ O this false soul of Egypt! this grave charm- Whose eye beck'd forth my\
\ wars and call'd them home, Whose bosom was my crownet, my chief end- Like\
\ a right gypsy hath at fast and loose Beguil'd me to the very heart of loss.\
\ What, Eros, Eros! Enter CLEOPATRA\n Ah, thou spell!\
\ Avaunt!\n"
- "TALBOT. Saint George and victory! Fight, soldiers, fight.\n The Regent hath\
\ with Talbot broke his word And left us to the rage of France his sword. \
\ Where is John Talbot? Pause and take thy breath; I gave thee life and rescu'd\
\ thee from death. JOHN. O, twice my father, twice am I thy son! The life\
\ thou gav'st me first was lost and done Till with thy warlike sword, despite\
\ of fate, To my determin'd time thou gav'st new date. TALBOT. When from the\
\ Dauphin's crest thy sword struck fire, It warm'd thy father's heart with\
\ proud desire Of bold-fac'd victory. Then leaden age, Quicken'd with youthful\
\ spleen and warlike rage, Beat down Alencon, Orleans, Burgundy, And from\
\ the pride of Gallia rescued thee. The ireful bastard Orleans, that drew blood\
\ From thee, my boy, and had the maidenhood Of thy first fight, I soon encountered\
\ And, interchanging blows, I quickly shed Some of his bastard blood; and\
\ in disgrace\n Bespoke him thus: 'Contaminated, base,\n"
- source_sentence: What is the significance of the tennis balls in the excerpt from
the play?
sentences:
- "My fault is past. But, O, what form of prayer\n Can serve my turn? 'Forgive\
\ me my foul murther'? That cannot be; since I am still possess'd Of those\
\ effects for which I did the murther- My crown, mine own ambition, and my\
\ queen. May one be pardon'd and retain th' offence? In the corrupted currents\
\ of this world Offence's gilded hand may shove by justice, And oft 'tis\
\ seen the wicked prize itself Buys out the law; but 'tis not so above. \
\ There is no shuffling; there the action lies In his true nature, and we ourselves\
\ compell'd, Even to the teeth and forehead of our faults, To give in evidence.\
\ What then? What rests? Try what repentance can. What can it not? Yet what\
\ can it when one cannot repent? O wretched state! O bosom black as death!\
\ O limed soul, that, struggling to be free, Art more engag'd! Help, angels!\
\ Make assay. Bow, stubborn knees; and heart with strings of steel, Be\
\ soft as sinews of the new-born babe! All may be well. \
\ He kneels.\n Enter Hamlet. Ham. Now might\
\ I do it pat, now he is praying;\n And now I'll do't. And so he goes to heaven,\
\ And so am I reveng'd. That would be scann'd. A villain kills my father;\
\ and for that, I, his sole son, do this same villain send To heaven. \
\ Why, this is hire and salary, not revenge! He took my father grossly, full\
\ of bread, With all his crimes broad blown, as flush as May; And how his\
\ audit stands, who knows save heaven?\n But in our circumstance and course\
\ of thought,\n"
- "YORK. From Ireland thus comes York to claim his right\n And pluck the crown\
\ from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright,\
\ To entertain great England's lawful king. Ah, sancta majestas! who would\
\ not buy thee dear? Let them obey that knows not how to rule; This hand\
\ was made to handle nought but gold. I cannot give due action to my words\
\ Except a sword or sceptre balance it.\n A sceptre shall it have, have\
\ I a soul\n On which I'll toss the flower-de-luce of France.\n \
\ Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb\
\ me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York,\
\ if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept\
\ thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger\
\ from Henry, our dread liege, To know the reason of these arms in peace; \
\ Or why thou, being a subject as I am, Against thy oath and true allegiance\
\ sworn, Should raise so great a power without his leave, Or dare to bring\
\ thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is\
\ so great. O, I could hew up rocks and fight with flint, I am so angry\
\ at these abject terms; And now, like Ajax Telamonius, On sheep or oxen\
\ could I spend my fury. I am far better born than is the King, More like\
\ a king, more kingly in my thoughts; But I must make fair weather yet awhile,\
\ Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon\
\ me That I have given no answer all this while; My mind was troubled with\
\ deep melancholy. The cause why I have brought this army hither Is to\
\ remove proud Somerset from the King, Seditious to his Grace and to the state.\
\ BUCKINGHAM. That is too much presumption on thy part; But if thy arms be\
\ to no other end, The King hath yielded unto thy demand:\n The Duke of\
\ Somerset is in the Tower.\n"
- "Says that you savour too much of your youth,\n And bids you be advis'd there's\
\ nought in France That can be with a nimble galliard won; You cannot revel\
\ into dukedoms there. He therefore sends you, meeter for your spirit, This\
\ tun of treasure; and, in lieu of this, Desires you let the dukedoms that\
\ you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What\
\ treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the\
\ Dauphin is so pleasant with us; His present and your pains we thank you for.\
\ When we have match'd our rackets to these balls, We will in France,\
\ by God's grace, play a set Shall strike his father's crown into the hazard.\
\ Tell him he hath made a match with such a wrangler That all the courts\
\ of France will be disturb'd With chaces. And we understand him well, How\
\ he comes o'er us with our wilder days, Not measuring what use we made of\
\ them. We never valu'd this poor seat of England; And therefore, living\
\ hence, did give ourself To barbarous licence; as 'tis ever common That\
\ men are merriest when they are from home. But tell the Dauphin I will keep\
\ my state, Be like a king, and show my sail of greatness, When I do rouse\
\ me in my throne of France; For that I have laid by my majesty And plodded\
\ like a man for working-days; But I will rise there with so full a glory \
\ That I will dazzle all the eyes of France, Yea, strike the Dauphin blind\
\ to look on us. And tell the pleasant Prince this mock of his Hath turn'd\
\ his balls to gun-stones, and his soul Shall stand sore charged for the wasteful\
\ vengeance\n That shall fly with them; for many a thousand widows\n"
model-index:
- name: RAG_general/rerank/models/BAAI-bge-m3-ft
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: m3 dev
type: m3-dev
metrics:
- type: cosine_accuracy@3
value: 0.5356211989574283
name: Cosine Accuracy@3
- type: cosine_precision@1
value: 0.4209383145091225
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.17854039965247612
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11416159860990441
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06185925282363162
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.4209383145091225
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5356211989574283
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5708079930495221
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6185925282363163
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.518363473579454
name: Cosine Ndcg@10
- type: cosine_mrr@200
value: 0.4915925316966444
name: Cosine Mrr@200
- type: cosine_map@100
value: 0.49136031845002553
name: Cosine Map@100
- type: dot_accuracy@3
value: 0.5356211989574283
name: Dot Accuracy@3
- type: dot_precision@1
value: 0.4209383145091225
name: Dot Precision@1
- type: dot_precision@3
value: 0.17854039965247612
name: Dot Precision@3
- type: dot_precision@5
value: 0.11416159860990441
name: Dot Precision@5
- type: dot_precision@10
value: 0.06185925282363162
name: Dot Precision@10
- type: dot_recall@1
value: 0.4209383145091225
name: Dot Recall@1
- type: dot_recall@3
value: 0.5356211989574283
name: Dot Recall@3
- type: dot_recall@5
value: 0.5708079930495221
name: Dot Recall@5
- type: dot_recall@10
value: 0.6185925282363163
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.518363473579454
name: Dot Ndcg@10
- type: dot_mrr@200
value: 0.4915925316966444
name: Dot Mrr@200
- type: dot_map@100
value: 0.49136031845002553
name: Dot Map@100
---
# RAG_general/rerank/models/BAAI-bge-m3-ft
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) <!-- at revision 5617a9f61b028005a4858fdac845db406aefb181 -->
- **Maximum Sequence Length:** 8192 tokens
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rjnClarke/BAAI-bge-m3-fine-tuned")
# Run inference
sentences = [
'What is the significance of the tennis balls in the excerpt from the play?',
"Says that you savour too much of your youth,\n And bids you be advis'd there's nought in France That can be with a nimble galliard won; You cannot revel into dukedoms there. He therefore sends you, meeter for your spirit, This tun of treasure; and, in lieu of this, Desires you let the dukedoms that you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant with us; His present and your pains we thank you for. When we have match'd our rackets to these balls, We will in France, by God's grace, play a set Shall strike his father's crown into the hazard. Tell him he hath made a match with such a wrangler That all the courts of France will be disturb'd With chaces. And we understand him well, How he comes o'er us with our wilder days, Not measuring what use we made of them. We never valu'd this poor seat of England; And therefore, living hence, did give ourself To barbarous licence; as 'tis ever common That men are merriest when they are from home. But tell the Dauphin I will keep my state, Be like a king, and show my sail of greatness, When I do rouse me in my throne of France; For that I have laid by my majesty And plodded like a man for working-days; But I will rise there with so full a glory That I will dazzle all the eyes of France, Yea, strike the Dauphin blind to look on us. And tell the pleasant Prince this mock of his Hath turn'd his balls to gun-stones, and his soul Shall stand sore charged for the wasteful vengeance\n That shall fly with them; for many a thousand widows\n",
"YORK. From Ireland thus comes York to claim his right\n And pluck the crown from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright, To entertain great England's lawful king. Ah, sancta majestas! who would not buy thee dear? Let them obey that knows not how to rule; This hand was made to handle nought but gold. I cannot give due action to my words Except a sword or sceptre balance it.\n A sceptre shall it have, have I a soul\n On which I'll toss the flower-de-luce of France.\n Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York, if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger from Henry, our dread liege, To know the reason of these arms in peace; Or why thou, being a subject as I am, Against thy oath and true allegiance sworn, Should raise so great a power without his leave, Or dare to bring thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is so great. O, I could hew up rocks and fight with flint, I am so angry at these abject terms; And now, like Ajax Telamonius, On sheep or oxen could I spend my fury. I am far better born than is the King, More like a king, more kingly in my thoughts; But I must make fair weather yet awhile, Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon me That I have given no answer all this while; My mind was troubled with deep melancholy. The cause why I have brought this army hither Is to remove proud Somerset from the King, Seditious to his Grace and to the state. BUCKINGHAM. That is too much presumption on thy part; But if thy arms be to no other end, The King hath yielded unto thy demand:\n The Duke of Somerset is in the Tower.\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
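Given the model's retrieval focus, a typical pattern is to embed one query against a candidate corpus and rank passages by similarity. A short sketch under that assumption (the passages are abridged stand-ins for full excerpts):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("rjnClarke/BAAI-bge-m3-fine-tuned")

query = "What is the significance of the tennis balls in the excerpt from the play?"
# Abridged stand-ins for real play excerpts
corpus = [
    "EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant with us ...",
    "YORK. From Ireland thus comes York to claim his right ...",
]

query_emb = model.encode([query])
corpus_emb = model.encode(corpus)

# Rank the corpus by similarity to the query
scores = model.similarity(query_emb, corpus_emb)[0]
best = int(scores.argmax())
print(scores, "-> best match:", corpus[best][:50])
```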
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `m3-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@3 | 0.5356 |
| cosine_precision@1 | 0.4209 |
| cosine_precision@3 | 0.1785 |
| cosine_precision@5 | 0.1142 |
| cosine_precision@10 | 0.0619 |
| cosine_recall@1 | 0.4209 |
| cosine_recall@3 | 0.5356 |
| cosine_recall@5 | 0.5708 |
| cosine_recall@10 | 0.6186 |
| cosine_ndcg@10 | 0.5184 |
| cosine_mrr@200 | 0.4916 |
| **cosine_map@100** | **0.4914** |
| dot_accuracy@3 | 0.5356 |
| dot_precision@1 | 0.4209 |
| dot_precision@3 | 0.1785 |
| dot_precision@5 | 0.1142 |
| dot_precision@10 | 0.0619 |
| dot_recall@1 | 0.4209 |
| dot_recall@3 | 0.5356 |
| dot_recall@5 | 0.5708 |
| dot_recall@10 | 0.6186 |
| dot_ndcg@10 | 0.5184 |
| dot_mrr@200 | 0.4916 |
| dot_map@100 | 0.4914 |
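The cosine_* and dot_* rows are identical because the final `Normalize()` module makes every embedding unit-length, so dot product and cosine similarity coincide. To run this style of evaluation on your own split, the evaluator takes dicts of queries, corpus passages, and relevance judgments; the m3-dev data is not published with this card, so the example below uses illustrative stand-ins:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rjnClarke/BAAI-bge-m3-fine-tuned")

# Illustrative toy data; replace with the real dev split
queries = {"q1": "What is the significance of the tennis balls in the excerpt from the play?"}
corpus = {
    "d1": "EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant ...",
    "d2": "YORK. From Ireland thus comes York to claim his right ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="m3-dev")
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr, map@100, ...
```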
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,359 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 25.61 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 38 tokens</li><li>mean: 394.49 tokens</li><li>max: 577 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Who is the general being described in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>What is the main conflict highlighted in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>The excerpt showcases the tension between Antony's loyalty to Cleopatra and his obligations to Caesar, as well as Cleopatra's influence over him.</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 2,302 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 11 tokens</li><li>mean: 25.55 tokens</li><li>max: 77 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 400.31 tokens</li><li>max: 610 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The excerpt highlights the tension between Antony's loyalty to Cleopatra and his standing in Rome, showcasing the intricate balance of power and love in the play.</code> | <code>When shrill-tongu'd Fulvia scolds. The messengers!<br> ANTONY. Let Rome in Tiber melt, and the wide arch Of the rang'd empire fall! Here is my space. Kingdoms are clay; our dungy earth alike Feeds beast as man. The nobleness of life Is to do thus [emhracing], when such a mutual pair And such a twain can do't, in which I bind, On pain of punishment, the world to weet We stand up peerless. CLEOPATRA. Excellent falsehood! Why did he marry Fulvia, and not love her? I'll seem the fool I am not. Antony Will be himself. ANTONY. But stirr'd by Cleopatra. Now for the love of Love and her soft hours, Let's not confound the time with conference harsh; There's not a minute of our lives should stretch Without some pleasure now. What sport to-night? CLEOPATRA. Hear the ambassadors. ANTONY. Fie, wrangling queen! Whom everything becomes- to chide, to laugh, To weep; whose every passion fully strives To make itself in thee fair and admir'd. No messenger but thine, and all alone To-night we'll wander through the streets and note The qualities of people. Come, my queen; Last night you did desire it. Speak not to us. Exeunt ANTONY and CLEOPATRA, with the train DEMETRIUS. Is Caesar with Antonius priz'd so slight? PHILO. Sir, sometimes when he is not Antony, He comes too short of that great property Which still should go with Antony. DEMETRIUS. I am full sorry That he approves the common liar, who Thus speaks of him at Rome; but I will hope<br> Of better deeds to-morrow. Rest you happy! Exeunt<br></code> |
| <code>What is the significance of the soothsayer in the context of the play?</code> | <code>CHARMIAN. Lord Alexas, sweet Alexas, most anything Alexas, almost<br> most absolute Alexas, where's the soothsayer that you prais'd so to th' Queen? O that I knew this husband, which you say must charge his horns with garlands! ALEXAS. Soothsayer! SOOTHSAYER. Your will? CHARMIAN. Is this the man? Is't you, sir, that know things? SOOTHSAYER. In nature's infinite book of secrecy A little I can read. ALEXAS. Show him your hand.<br> Enter ENOBARBUS ENOBARBUS. Bring in the banquet quickly; wine enough<br> Cleopatra's health to drink. CHARMIAN. Good, sir, give me good fortune. SOOTHSAYER. I make not, but foresee. CHARMIAN. Pray, then, foresee me one. SOOTHSAYER. You shall be yet far fairer than you are. CHARMIAN. He means in flesh. IRAS. No, you shall paint when you are old. CHARMIAN. Wrinkles forbid! ALEXAS. Vex not his prescience; be attentive. CHARMIAN. Hush!<br> SOOTHSAYER. You shall be more beloving than beloved.<br></code> |
| <code>What is the setting of the scene in which the excerpt takes place?</code> | <code>sweet Isis, I beseech thee! And let her die too, and give him a<br> worse! And let worse follow worse, till the worst of all follow him laughing to his grave, fiftyfold a cuckold! Good Isis, hear me this prayer, though thou deny me a matter of more weight; good Isis, I beseech thee! IRAS. Amen. Dear goddess, hear that prayer of the people! For, as it is a heartbreaking to see a handsome man loose-wiv'd, so it is a deadly sorrow to behold a foul knave uncuckolded. Therefore, dear Isis, keep decorum, and fortune him accordingly! CHARMIAN. Amen. ALEXAS. Lo now, if it lay in their hands to make me a cuckold, they would make themselves whores but they'ld do't!<br> Enter CLEOPATRA ENOBARBUS. Hush! Here comes Antony.<br> CHARMIAN. Not he; the Queen. CLEOPATRA. Saw you my lord? ENOBARBUS. No, lady. CLEOPATRA. Was he not here? CHARMIAN. No, madam. CLEOPATRA. He was dispos'd to mirth; but on the sudden A Roman thought hath struck him. Enobarbus! ENOBARBUS. Madam? CLEOPATRA. Seek him, and bring him hither. Where's Alexas? ALEXAS. Here, at your service. My lord approaches.<br> Enter ANTONY, with a MESSENGER and attendants CLEOPATRA. We will not look upon him. Go with us.<br> Exeunt CLEOPATRA, ENOBARBUS, and the rest MESSENGER. Fulvia thy wife first came into the field. ANTONY. Against my brother Lucius? MESSENGER. Ay.<br> But soon that war had end, and the time's state<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
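For reference, a minimal sketch of how this loss might be instantiated through the `sentence-transformers` API; the base checkpoint name below is a placeholder, since the card does not restate it here:

```python
from sentence_transformers import SentenceTransformer, losses, util

# Placeholder: substitute the actual base model of this card.
model = SentenceTransformer("base-model-name")

# In-batch negatives: each anchor is scored against its own positive and,
# implicitly, against every other positive in the batch. `scale` multiplies
# the cosine similarities before the cross-entropy step.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,
    similarity_fct=util.cos_sim,
)
```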
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `gradient_accumulation_steps`: 2
- `learning_rate`: 1e-05
- `weight_decay`: 5e-05
- `warmup_steps`: 50
- `fp16`: True
- `half_precision_backend`: True
- `load_best_model_at_end`: True
- `fp16_backend`: True
- `batch_sampler`: no_duplicates
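A minimal sketch of how these non-default values might be passed to the Sentence Transformers 3.x trainer; `output_dir` is a placeholder, and `save_strategy` is added here only because `load_best_model_at_end=True` requires it to match `eval_strategy`:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",    # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy when loading the best model
    gradient_accumulation_steps=2,
    learning_rate=1e-5,
    weight_decay=5e-5,
    warmup_steps=50,
    num_train_epochs=3,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```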
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 5e-05
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 50
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: True
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: True
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | loss | m3-dev_cosine_map@100 |
|:----------:|:--------:|:-------------:|:----------:|:---------------------:|
| 0.7722 | 500 | 1.1966 | - | - |
| 1.0008 | 648 | - | 0.8832 | 0.4814 |
| 1.5436 | 1000 | 0.8492 | - | - |
| 2.0008 | 1296 | - | 0.8582 | 0.4855 |
| 2.3151 | 1500 | 0.6805 | - | - |
| **2.9961** | **1941** | **-** | **0.8607** | **0.4914** |
* The bold row denotes the saved checkpoint.
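The `m3-dev_cosine_map@100` column is the style of metric emitted by a Sentence Transformers `InformationRetrievalEvaluator` with `name="m3-dev"`; a hypothetical sketch follows, with placeholder queries, corpus, and relevance judgments standing in for the real dev split:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data; the actual dev split is the 2,302-sample evaluation set above.
queries = {"q1": "What is the significance of the soothsayer in the context of the play?"}
corpus = {"d1": "CHARMIAN. Lord Alexas, sweet Alexas, [...] where's the soothsayer [...]"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="m3-dev",  # metric keys come out as e.g. "m3-dev_cosine_map@100"
)
results = evaluator(model)  # `model` is the SentenceTransformer under evaluation
```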
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.43.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-3e-250samples-20iter | udrearobert999 | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"base_model:finetune:sentence-transformers/multi-qa-mpnet-base-cos-v1",
"model-index",
"region:us"
] | 2024-05-07T17:26:22 | 2024-05-07T17:27:00 | 47 | 0 | ---
base_model: sentence-transformers/multi-qa-mpnet-base-cos-v1
library_name: setfit
metrics:
- f1
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: in durankulak near varna is another important example other signs of early
metals are found from the third millennium bc in palmela portugal los millares
spain and stonehenge united kingdom the precise beginnings however have not be
clearly ascertained and new discoveries are both continuous and ongoing in tamilnadu
in approximately 1900 bc ancient iron smelting sites were functioning in tamil
nadu in the near east about 3500 bc it was discovered that by combining copper
and tin a superior metal could be made an alloy called bronze this represented
a major technological shift known as the bronze age the extraction of iron from
its ore into a workable metal is much more difficult than for copper or tin the
process appears to have been invented by the hittites in about 1200 bc beginning
the iron age the secret of extracting and working iron was a key factor in the
success of the philistineshistorical developments in ferrous metallurgy can be
found in a wide variety of past cultures and civilizations this includes the ancient
and medieval kingdoms and empires of the middle east and near east ancient iran
ancient egypt ancient nubia and anatolia in presentday turkey ancient nok carthage
the greeks and romans of ancient europe medieval europe ancient and medieval china
ancient and medieval india ancient and medieval japan amongst others many applications
practices and devices associated or involved in metallurgy were established in
ancient china such as the innovation of the blast furnace cast iron hydraulicpowered
trip hammers and double acting piston bellowsa 16th century book by georg agricola
de re metallica describes the highly developed and complex processes of mining
metal ores metal extraction and metallurgy of the time agricola has been described
as the father of metallurgy extractive metallurgy is the practice of removing
valuable metals from an ore and refining the extracted raw metals into a purer
form in order to convert a metal oxide or sulphide to a purer metal the ore must
be reduced physically chemically or electrolytically extractive metallurgists
are interested in three primary streams feed concentrate metal oxidesulphide and
tailings waste after mining large pieces of the ore feed are broken through crushing
or grinding in order to obtain particles small enough where each particle is either
mostly valuable or mostly waste concentrating the particles of value in a form
supporting separation enables the desired metal to be removed from waste products
mining may not be necessary if the ore body and physical environment are conducive
to leaching leaching dissolves minerals in an ore body and results in an enriched
solution the solution is collected and processed to extract valuable metals ore
- text: '##rch procedure that evaluates the objective function p x displaystyle pmathbf
x on a grid of candidate source locations g displaystyle mathcal g to estimate
the spatial location of the sound source x s displaystyle textbf xs as the point
of the grid that provides the maximum srp modifications of the classical srpphat
algorithm have been proposed to reduce the computational cost of the gridsearch
step of the algorithm and to increase the robustness of the method in the classical
srpphat for each microphone pair and for each point of the grid a unique integer
tdoa value is selected to be the acoustic delay corresponding to that grid point
this procedure does not guarantee that all tdoas are associated to points on the
grid nor that the spatial grid is consistent since some of the points may not
correspond to an intersection of hyperboloids this issue becomes more problematic
with coarse grids since when the number of points is reduced part of the tdoa
information gets lost because most delays are not anymore associated to any point
in the grid the modified srpphat collects and uses the tdoa information related
to the volume surrounding each spatial point of the search grid by considering
a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x
and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation
limits of gcc delays which depend on the spatial location x displaystyle mathbf
x the accumulation limits can be calculated beforehand in an exact way by exploring
the boundaries separating the regions corresponding to the points of the grid
alternatively they can be selected by considering the spatial gradient of the
tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle
nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau
m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright
of the gradient is for a rectangular grid where neighboring points are separated
a distance r displaystyle r the lower and upper accumulation limits are given
by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min
leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert'
- text: authority to select projects and mandated new metropolitan planning initiatives
for the first time state transportation officials were required to consult seriously
with local representatives on mpo governing boards regarding matters of project
prioritization and decisionmaking these changes had their roots in the need to
address increasingly difficult transportation problems — in particular the more
complicated patterns of traffic congestion that arose with the suburban development
boom in the previous decades many recognized that the problems could only be addressed
effectively through a stronger federal commitment to regional planning the legislation
that emerged the intermodal surface transportation efficiency act istea was signed
into federal law by president george h w bush in december 1991 it focused on improving
transportation not as an end in itself but as the means to achieve important national
goals including economic progress cleaner air energy conservation and social equity
istea promoted a transportation system in which different modes and facilities
— highway transit pedestrian bicycle aviation and marine — were integrated to
allow a seamless movement of both goods and people new funding programs provided
greater flexibility in the use of funds particularly regarding using previously
restricted highway funds for transit development improved intermodal connections
and emphasized upgrades to existing facilities over building new capacity — particularly
roadway capacity to accomplish more serious metropolitan planning istea doubled
federal funding for mpo operations and required the agencies to evaluate a variety
of multimodal solutions to roadway congestion and other transportation problems
mpos also were required to broaden public participation in the planning process
and to see that investment decisions contributed to meeting the air quality standards
of the clean air act amendments in addition istea placed a new requirement on
mpos to conduct fiscally constrained planning and ensure that longrange transportation
plans and shortterm transportation improvement programs were fiscally constrained
in other words adopted plans and programs can not include more projects than reasonably
can be expected to be funded through existing or projected sources of revenues
this new requirement represented a major conceptual shift for many mpos and others
in the planning community since the imposition of fiscal discipline on plans now
required not only understanding how much money might be available but how to prioritize
investment needs and make difficult choices among competing needs adding to this
complexity is the need to plan across transportation modes and develop approaches
for multimodal investment prioritization and decision making it is in this context
of greater prominence funding and requirements that mpos function today an annual
element is composed of transportation improvement projects contained in an areas
transportation improvement program tip which is proposed for implementation during
the current year the annual element is submitted to the us department of transportation
as part of the required planning process the passage of safe accountable flexible
efficient transportation equity act a legacy for users safetealu
- text: '##pignygiroux served as an assistant professor from 1997 2003 associate professor
from 2003 2014 chair of the department of geography from 2015 2018 and professor
beginning in 2014 with secondary appointments in department of geology the college
of education social services and rubenstein school of environment natural resources
she teaches courses in meteorology climatology physical geography remote sensing
and landsurface processes in her work as state climatologist for vermont dupignygiroux
uses her expertise hydrology and extreme weather such as floods droughts and storms
to keep the residents of vermont informed on how climate change will affect their
homes health and livelihoods she assists other state agencies in preparing for
and adapting to current and future impacts of climate change on vermonts transportation
system emergency management planning and agriculture and forestry industries for
example she has published analyses of the impacts of climate change on the health
of vermonts sugar maples a hardwood species of key economic and cultural importance
to the state as cochair of vermonts state ’ s drought task force she played a
key role in developing the 2018 vermont state hazard mitigation plandupignygiroux
served as secretary for the american association of state climatologists from
20102011 and president elect from 20192020 in june 2020 she was elected as president
of the american association of state climatologists which is a twoyear term in
addition to her research on climate change dupignygiroux is known for her efforts
to research and promote climate literacy climate literacy is an understanding
of the influences of and influences on the climate system including how people
change the climate how climate metrics are observed and modelled and how climate
change affects society “ being climate literate is more critical than ever before
” lesleyann dupignygiroux stated for a 2020 article on climate literacy “ if we
do not understand weather climate and climate change as intricate and interconnected
systems then our appreciation of the big picture is lost ” dupignygiroux is known
for her climate literacy work with elementary and high school teachers and students
she cofounded the satellites weather and climate swac project in 2008 which is
a professional development program for k12 teachers designed to promote climate
literacy and interest in the stem science technology engineering and mathematics
careers dupignygiroux is also a founding member of the climate literacy and energy
awareness network clean formerly climate literacy network a communitybased effort
to support climate literacy and communication in a 2016 interview dupignygiroux
stated “ sharing knowledge and giving back to my community are my two axioms in
life watching students mature and flourish in'
- text: no solutions to x n y n z n displaystyle xnynzn for all n ≥ 3 displaystyle
ngeq 3 this claim appears in his annotations in the margins of his copy of diophantus
euler the interest of leonhard euler 1707 – 1783 in number theory was first spurred
in 1729 when a friend of his the amateur goldbach pointed him towards some of
fermats work on the subject this has been called the rebirth of modern number
theory after fermats relative lack of success in getting his contemporaries attention
for the subject eulers work on number theory includes the following proofs for
fermats statements this includes fermats little theorem generalised by euler to
nonprime moduli the fact that p x 2 y 2 displaystyle px2y2 if and only if p ≡
1 mod 4 displaystyle pequiv 1bmod 4 initial work towards a proof that every integer
is the sum of four squares the first complete proof is by josephlouis lagrange
1770 soon improved by euler himself the lack of nonzero integer solutions to x
4 y 4 z 2 displaystyle x4y4z2 implying the case n4 of fermats last theorem the
case n3 of which euler also proved by a related method pells equation first misnamed
by euler he wrote on the link between continued fractions and pells equation first
steps towards analytic number theory in his work of sums of four squares partitions
pentagonal numbers and the distribution of prime numbers euler pioneered the use
of what can be seen as analysis in particular infinite series in number theory
since he lived before the development of complex analysis most of his work is
restricted to the formal manipulation of power series he did however do some very
notable though not fully rigorous early work on what would later be called the
riemann zeta function quadratic forms following fermats lead euler did further
research on the question of which primes can be expressed in the form x 2 n y
2 displaystyle x2ny2 some of it prefiguring quadratic reciprocity diophantine
equations euler worked on some diophantine equations of genus 0 and 1 in particular
he studied diophantuss work he tried to systematise it but the time was not yet
ripe for such an endeavour — algebraic geometry was still in its infancy he did
notice there was a connection between diophantine problems and elliptic integrals
whose study he had himself initiated lagrange legendre and gauss josephlouis
inference: true
model-index:
- name: SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: f1
value: 0.7540954329342108
name: F1
---
# SetFit with sentence-transformers/multi-qa-mpnet-base-cos-v1
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1) as the Sentence Transformer embedding model. A [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
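A minimal inference sketch, assuming the standard `setfit` API; the input string is abbreviated from one of the widget examples above:

```python
from setfit import SetFitModel

# Download the checkpoint from the Hugging Face Hub.
model = SetFitModel.from_pretrained(
    "udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-3e-250samples-20iter"
)

# Returns one of the 43 class labels for each input text.
preds = model.predict([
    "in durankulak near varna is another important example [...]",
])
print(preds)
```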
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/multi-qa-mpnet-base-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-cos-v1)
- **Classification head:** a [SetFitHead](https://huggingface.co/docs/setfit/reference/main#setfit.SetFitHead) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 43 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:---------|
| 20 | <ul><li>'physical and cosmological worlds'</li><li>'the migration period also known as the barbarian invasions was a period in european history marked by largescale migrations that saw the fall of the western roman empire and subsequent settlement of its former territories by various tribes and the establishment of the postroman kingdomsthe term refers to the important role played by the migration invasion and settlement of various tribes notably the franks goths alemanni alans huns early slavs pannonian avars bulgars and magyars within or into the territories of the roman empire and europe as a whole the period is traditionally taken to have begun in ad 375 possibly as early as 300 and ended in 568 various factors contributed to this phenomenon of migration and invasion and their role and significance are still widely discussed historians differ as to the dates for the beginning and ending of the migration period the beginning of the period is widely regarded as the invasion of europe by the huns from asia in about 375 and the ending with the conquest of italy by the lombards in 568 but a more loosely set period is from as early as 300 to as late as 800 for example in the 4th century a very large group of goths was settled as foederati within the roman balkans and the franks were settled south of the rhine in roman gaul in 406 a particularly large and unexpected crossing of the rhine was made by a group of vandals alans and suebi as central power broke down in the western roman empire the military became more important but was dominated by men of barbarian origin there are contradictory opinions as to whether the fall of the western roman empire was a result of an increase in migrations or both the breakdown of central power and the increased importance of nonromans resulted in internal roman factors migrations and the use of nonromans in the military were known in the periods before and after and the eastern roman empire adapted and continued to exist until the fall of constantinople to the ottomans in 1453 the fall of the western roman empire although it involved the establishment of competing barbarian kingdoms was to some extent managed by the eastern emperors the migrants comprised war bands or tribes of 10000 to 20000 people immigration was common throughout the time of the roman empire but over the course of 100 years the migrants numbered not more than 750000 in total compared to an average 40 million population of the roman empire at that time the first migrations of peoples were made by germanic tribes such as the goths including the visigoths and the ostrogoths the vandals the anglosaxons the lombards the suebi the frisii the'</li><li>'the criterion of embarrassment is a type of historical analysis in which a historical account is deemed likely to be true under the inference that the author would have no reason to invent a historical account which might embarrass them certain biblical scholars have used this as a metric for assessing whether the new testaments accounts of jesus actions and words are historically probablethe criterion of embarrassment is one of the criteria of authenticity used by academics the others being the criterion of dissimilarity the criterion of language and environment criterion of coherence and the criterion of multiple attestation the criterion of embarrassment is a longstanding tool of new testament research the phrase was used by john p meier in his 1991 book a marginal jew he attributed it to edward schillebeeckx 1914 – 2009 who does not appear to have actually used the term in his written works the earliest use of the approach was possibly by paul wilhelm schmiedel in the encyclopaedia biblica 1899 the assumption of the criterion of embarrassment is that the early church would hardly have gone out of its way to create or falsify historical material that embarrassed its author or weakened its position in arguments with opponents rather embarrassing material coming from jesus would be either suppressed or softened in later stages of the gospel tradition this criterion is rarely used by itself and is typically one of a number of criteria such as the criterion of dissimilarity and the criterion of multiple attestation along with the historical method the crucifixion of jesus is an example of an event that meets the criterion of embarrassment this method of execution was considered the most shameful and degrading in the roman world and advocates of the criterion claim this method of execution is therefore the least likely to have been invented by the followers of jesus the criterion of embarrassment has its limitations and is almost always used in concert with the other criteria one limitation to the criterion of embarrassment is that clearcut cases of such embarrassment are few clearly context is important as what might be considered as embarrassing in one era and social context may not have been so in another embarrassing details may be included as an alternative to an even more embarrassing account of the same event as a hypothetical example saint peters denial of jesus could have been a substitution for an even greater misdeed of peteran example of the second point is found in the stories of the infancy gospels in one account from the infancy gospel of thomas a very young jesus is said to have used his supernatural powers first to strike dead and then revive a playmate who had accidentally bumped into him if this tradition'</li></ul> |
| 16 | <ul><li>'the badlands guardian is a geomorphological feature located near medicine hat in the southeast corner of alberta canada the feature was discovered in 2005 by lynn hickox through use of google earth viewed from the air the feature has been said to resemble a human head wearing a full indigenous type of headdress facing directly westward additional humanmade structures have been said to resemble a pair of earphones worn by the figure the apparent earphones are a road township road 123a and an oil well which were installed in the early 2000s and are expected to disappear once the project is abandonedthe head is a drainage feature created through erosion of soft clayrich soil by the action of wind and water the arid badlands are typified by infrequent but intense rainshowers sparse vegetation and soft sediments the head may have been created during a short period of fast erosion immediately following intense rainfall although the image appears to be a convex feature it is actually concave – that is a valley which is formed by erosion on a stratum of clay and is an instance of the hollowface illusion its age is estimated to be in the hundreds of years at a minimumin 2006 suitable names were canvassed by cbc radio one program as it happens out of 50 names submitted seven were suggested to the cypress county council they altered the suggested guardian of the badlands to become badlands guardianthe badlands guardian was also described by the sydney morning herald as a net sensation pcworld magazine has referred to the formation as a geological marvel it is listed as the seventh of the top ten google earth finds by time magazine apophenia the tendency to perceive connections between unrelated things pareidolia the phenomenon of perceiving faces in random patterns face on mars photographed by viking 1 in 1976 inuksuk traditional native arctic peoples stone marker statuaries in alaska and arctic canada marcahuasi a plateau in the andes near lima peru with numerous rock formations with surprising likenesses to specific animals people and religious symbols old man of the mountain former rock profile in new hampshire collapsed on may 3 2003 old man of hoy a rock pillar off scotland that resembles a standing man'</li><li>'to keep the ground cool both in areas with frostsusceptible soil permafrost may necessitate special enclosures for buried utilities called utilidors globally permafrost warmed by about 03 °c 054 °f between 2007 and 2016 with stronger warming observed in the continuous permafrost zone relative to the discontinuous zone observed warming was up to 3 °c 54 °f in parts of northern alaska early 1980s to mid2000s and up to 2 °c 36 °f in parts of the russian european north 1970 – 2020 this warming inevitably causes permafrost to thaw active layer thickness has increased in the european and russian arctic across the 21st century and at high elevation areas in europe and asia since the 1990s 1237 between 2000 and 2018 the average active layer thickness had increased from 127 centimetres 417 ft to 145 centimetres 476 ft at an average annual rate of 065 centimetres 026 in in yukon the zone of continuous permafrost might have moved 100 kilometres 62 mi poleward since 1899 but accurate records only go back 30 years the extent of subsea permafrost is decreasing as well as of 2019 97 of permafrost under arctic ice shelves is becoming warmer and thinner 1281 based on high agreement across model projections fundamental process understanding and paleoclimate evidence it is virtually certain that permafrost extent and volume will continue to shrink as the global climate warms with the extent of the losses determined by the magnitude of warming 1283 permafrost thaw is associated with a wide range of issues and international permafrost association ipa exists to help address them it convenes international permafrost conferences and maintains global terrestrial network for permafrost which undertakes special projects such as preparing databases maps bibliographies and glossaries and coordinates international field programmes and networks as recent warming deepens the active layer subject to permafrost thaw this exposes formerly stored carbon to biogenic processes which facilitate its entrance into the atmosphere as carbon dioxide and methane because carbon emissions from permafrost thaw contribute to the same warming which facilitates the thaw it is a wellknown example of a positive climate change feedback and because widespread permafrost thaw is effectively irreversible it is also considered one of tipping points in the climate systemin the northern circumpolar region permafrost contains organic matter equivalent to 1400 – 1650 billion tons of pure carbon which was built up over thousands of years this amount equals almost half of all organic material in all soils'</li><li>'1 ρ c c ρ c b 1 ρ m displaystyle h1cb1rho ccrho cb1rho m b 1 ρ m − ρ c h 1 ρ c displaystyle b1rho mrho ch1rho c b 1 h 1 ρ c ρ m − ρ c displaystyle b1frac h1rho crho mrho c where ρ m displaystyle rho m is the density of the mantle ca 3300 kg m−3 and ρ c displaystyle rho c is the density of the crust ca 2750 kg m−3 thus generally b1 [UNK] 5⋅h1in the case of negative topography a marine basin the balancing of lithospheric columns gives c ρ c h 2 ρ w b 2 ρ m c − h 2 − b 2 ρ c displaystyle crho ch2rho wb2rho mch2b2rho c b 2 ρ m − ρ c h 2 ρ c − ρ w displaystyle b2rho mrho ch2rho crho w b 2 ρ c − ρ w ρ m − ρ c h 2 displaystyle b2frac rho crho wrho mrho ch2 where ρ m displaystyle rho m is the density of the mantle ca 3300 kg m−3 ρ c displaystyle rho c is the density of the crust ca 2750 kg m−3 and ρ w displaystyle rho w is the density of the water ca 1000 kg m−3 thus generally b2 [UNK] 32⋅h2 for the simplified model shown the new density is given by ρ 1 ρ c c h 1 c displaystyle rho 1rho cfrac ch1c where h 1 displaystyle h1 is the height of the mountain and c the thickness of the crust this hypothesis was suggested to explain how large topographic loads such as seamounts eg hawaiian islands could be compensated by regional rather than local displacement of the lithosphere this is the more general solution for lithospheric flexure as it approaches the locally compensated models above as the load becomes much larger than a flexural wavelength or the flexural rigidity of the lithosphere approaches zerofor example the vertical displacement z of a region of ocean crust would be described by the differential equation d d 4 z d x 4 ρ m − ρ w z g p x displaystyle dfrac d4zdx4rho mrho wzgpx where ρ m displaystyle rho m and ρ w displaystyle rho w are'</li></ul> |
| 0 | <ul><li>'of harmonics enjoys some of the valuable properties of the classical fourier transform in terms of carrying convolutions to pointwise products or otherwise showing a certain understanding of the underlying group structure see also noncommutative harmonic analysis if the group is neither abelian nor compact no general satisfactory theory is currently known satisfactory means at least as strong as the plancherel theorem however many specific cases have been analyzed for example sln in this case representations in infinite dimensions play a crucial role study of the eigenvalues and eigenvectors of the laplacian on domains manifolds and to a lesser extent graphs is also considered a branch of harmonic analysis see eg hearing the shape of a drum harmonic analysis on euclidean spaces deals with properties of the fourier transform on rn that have no analog on general groups for example the fact that the fourier transform is rotationinvariant decomposing the fourier transform into its radial and spherical components leads to topics such as bessel functions and spherical harmonics harmonic analysis on tube domains is concerned with generalizing properties of hardy spaces to higher dimensions many applications of harmonic analysis in science and engineering begin with the idea or hypothesis that a phenomenon or signal is composed of a sum of individual oscillatory components ocean tides and vibrating strings are common and simple examples the theoretical approach often tries to describe the system by a differential equation or system of equations to predict the essential features including the amplitude frequency and phases of the oscillatory components the specific equations depend on the field but theories generally try to select equations that represent significant principles that are applicable the experimental approach is usually to acquire data that accurately quantifies the phenomenon for example in a study of tides the experimentalist would acquire samples of water depth as a function of time at closely enough spaced intervals to see each oscillation and over a long enough duration that multiple oscillatory periods are likely included in a study on vibrating strings it is common for the experimentalist to acquire a sound waveform sampled at a rate at least twice that of the highest frequency expected and for a duration many times the period of the lowest frequency expected for example the top signal at the right is a sound waveform of a bass guitar playing an open string corresponding to an a note with a fundamental frequency of 55 hz the waveform appears oscillatory but it is more complex than a simple sine wave indicating the presence of additional waves the different wave components contributing to the sound can be revealed by applying a mathematical analysis technique known as the fourier transform shown in the lower figure there is a prominent peak at'</li><li>'this results in decibel units on the logarithmic scale the logarithmic scale accommodates the vast range of sound heard by the human ear frequency or pitch is measured in hertz hz and reflects the number of sound waves propagated through the air per second the range of frequencies heard by the human ear range from 20 hz to 20000 hz however sensitivity to hearing higher frequencies decreases with age some organisms such as elephants can register frequencies between 0 and 20 hz infrasound and others such as bats can recognize frequencies above 20000 hz ultrasound to echolocateresearchers use different weights to account for noise frequency with intensity as humans do not perceive sound at the same loudness level the most commonly used weighted levels are aweighting cweighting and zweighting aweighting mirrors the range of hearing with frequencies of 20 hz to 20000 hz this gives more weight to higher frequencies and less weight to lower frequencies cweighting has been used to measure peak sound pressure or impulse noise similar to loud shortlived noises from machinery in occupational settings zweighting also known as zeroweighting represents noise levels without any frequency weightsunderstanding sound pressure levels is key to assessing measurements of noise pollution several metrics describing noise exposure include energy average equivalent level of the aweighted sound laeq this measures the average sound energy over a given period for constant or continuous noise such as road traffic laeq can be further broken up into different types of noise based on time of day however cutoffs for evening and nighttime hours may differ between countries with the united states belgium and new zealand noting evening hours from 19002200 or 700pm – 1000pm and nighttime hours from 2200700 or 1000pm – 700am and most european countries noting evening hours from 19002300 or 700pm – 1100pm and nighttime hours from 2300700 or 1100pm – 700am laeq terms include daynight average level dnl or ldn this measurement assesses the cumulative exposure to sound for a 24hour period leq over 24 hrs of the year with a 10 dba penalty or weight added to nighttime noise measurements given the increased sensitivity to noise at night this is calculated from the following equation united states belgium new zealand l d n 10 ⋅ log 10 1 24 15 ⋅ 10 l d a y 10 9 ⋅ 10 l n i g h t 10 10 displaystyle ldn10cdot log 10frac 124left15cdot 10frac lday109cdot 10frac lnight1010'</li><li>'and 2 new in the standard iec 61672 is a minimum 60 db linear span requirement and zfrequencyweighting with a general tightening of limit tolerances as well as the inclusion of maximum allowable measurement uncertainties for each described periodic test the periodic testing part of the standard iec616723 also requires that manufacturers provide the testing laboratory with correction factors to allow laboratory electrical and acoustic testing to better mimic free field acoustics responses each correction used should be provided with uncertainties that need to be accounted for in the testing laboratory final measurement uncertainty budget this makes it unlikely that a sound level meter designed to the older 60651 and 60804 standards will meet the requirements of iec 61672 2013 these withdrawn standards should no longer be used especially for any official purchasing requirements as they have significantly poorer accuracy requirements than iec 61672 combatants in every branch of the united states military are at risk for auditory impairments from steady state or impulse noises while applying double hearing protection helps prevent auditory damage it may compromise effectiveness by isolating the user from his or her environment with hearing protection on a soldier is less likely to be aware of his or her movements alerting the enemy to their presence hearing protection devices hpd could also require higher volume levels for communication negating their purpose milstd 1474d the first military standard milstd on sound was published in 1984 and underwent revision in 1997 to become milstd1474d this standard establishes acoustical noise limits and prescribes testing requirements and measurement techniques for determining conformance to the noise limits specified herein this standard applies to the acquisition and product improvement of all designed or purchased nondevelopmental items systems subsystems equipment and facilities that emit acoustic noise this standard is intended to address noise levels emitted during the full range of typical operational conditions milstd 1474e in 2015 milstd 1474d evolved to become milstd1474e which as of 2018 remains to be the guidelines for united states military defense weaponry development and usage in this standard the department of defense established guidelines for steady state noise impulse noise aural nondetectability aircraft and aerial systems and shipboard noise unless marked with warning signage steady state and impulse noises are not to exceed 85 decibels aweighted dba and if wearing protection 140 decibels dbp respectively it establishes acoustical noise limits and prescribes testing requirements and measurement techniques for determining conformance to the noise limits specified herein this standard applies to the acquisition and product improvement of all designed or purchased'</li></ul> |
| 1 | <ul><li>'in fluid dynamics a karman vortex street or a von karman vortex street is a repeating pattern of swirling vortices caused by a process known as vortex shedding which is responsible for the unsteady separation of flow of a fluid around blunt bodies it is named after the engineer and fluid dynamicist theodore von karman and is responsible for such phenomena as the singing of suspended telephone or power lines and the vibration of a car antenna at certain speeds mathematical modeling of von karman vortex street can be performed using different techniques including but not limited to solving the full navierstokes equations with kepsilon sst komega and reynolds stress and large eddy simulation les turbulence models by numerically solving some dynamic equations such as the ginzburg – landau equation or by use of a bicomplex variable a vortex street forms only at a certain range of flow velocities specified by a range of reynolds numbers re typically above a limiting re value of about 90 the global reynolds number for a flow is a measure of the ratio of inertial to viscous forces in the flow of a fluid around a body or in a channel and may be defined as a nondimensional parameter of the global speed of the whole fluid flow $\mathrm{Re} = \frac{UL}{\nu_0}$ where $U$ is the free stream flow speed ie the flow speed far from the fluid boundaries $u_\infty$ like the body speed relative to the fluid at rest or an inviscid flow speed computed through the bernoulli equation which is the original global flow parameter ie the target to be nondimensionalised $L$ is a characteristic length parameter of the body or channel and $\nu_0$ is the free stream kinematic viscosity parameter of the fluid which in turn is the ratio $\nu_0 = \mu_0 / \rho_0$ between $\mu_0$ the free stream fluid dynamic viscosity and $\rho_0$ the reference fluid density for common flows the ones which can usually be considered as incompressible or isothermal the kinematic viscosity is everywhere uniform over all the flow field and constant in time so there is no choice on the viscosity parameter which becomes naturally the kinematic viscosity of the fluid being considered at the temperature being considered on the other hand the reference length is always an arbitrary parameter so particular attention should be put when comparing flows around different obstacles or in channels of different shapes the global reynolds numbers should be referred to the same reference length this is actually the reason for which the most precise sources for airfoil and channel flow data specify the reference length'</li><li>'compressible flow or gas dynamics is the branch of fluid mechanics that deals with flows having significant changes in fluid density while all flows are compressible flows are usually treated as being incompressible when the mach number the ratio of the speed of the flow to the speed of sound is smaller than 0.3 since the density change due to velocity is about 5% in that case the study of compressible flow is relevant to highspeed aircraft jet engines rocket motors highspeed entry into a planetary atmosphere gas pipelines commercial applications such as abrasive blasting and many other fields the study of gas dynamics is often associated with the flight of modern highspeed aircraft and atmospheric reentry of spaceexploration vehicles however its origins lie with simpler machines at the beginning of the 19th century investigation into the behaviour of fired bullets led to improvement in the accuracy and capabilities of
guns and artillery as the century progressed inventors such as gustaf de laval advanced the field while researchers such as ernst mach sought to understand the physical phenomena involved through experimentation at the beginning of the 20th century the focus of gas dynamics research shifted to what would eventually become the aerospace industry ludwig prandtl and his students proposed important concepts ranging from the boundary layer to supersonic shock waves supersonic wind tunnels and supersonic nozzle design theodore von karman a student of prandtl continued to improve the understanding of supersonic flow other notable figures meyer luigi crocco and ascher shapiro also contributed significantly to the principles considered fundamental to the study of modern gas dynamics many others also contributed to this field accompanying the improved conceptual understanding of gas dynamics in the early 20th century was a public misconception that there existed a barrier to the attainable speed of aircraft commonly referred to as the sound barrier in truth the barrier to supersonic flight was merely a technological one although it was a stubborn barrier to overcome amongst other factors conventional aerofoils saw a dramatic increase in drag coefficient when the flow approached the speed of sound overcoming the larger drag proved difficult with contemporary designs thus the perception of a sound barrier however aircraft design progressed sufficiently to produce the bell x1 piloted by chuck yeager the x1 officially achieved supersonic speed in october 1947historically two parallel paths of research have been followed in order to further gas dynamics knowledge experimental gas dynamics undertakes wind tunnel model experiments and experiments in shock tubes and ballistic ranges with the use of optical techniques to document the findings theoretical gas dynamics considers the equations of motion applied to a variabledensity gas and their solutions much of basic gas dynamics is analytical but in the modern era computational fluid dynamics applies'</li><li>'coherent structures or their decay onto incoherent turbulent structures observed rapid changes lead to the belief that there must be a regenerative cycle that takes place during decay for example after a structure decays the result may be that the flow is now turbulent and becomes susceptible to a new instability determined by the new flow state leading to a new coherent structure being formed it is also possible that structures do not decay and instead distort by splitting into substructures or interacting with other coherent structures lagrangian coherent structures lcss are influential material surfaces that create clearly recognizable patterns in passive tracer distributions advected by an unsteady flow lcss can be classified as hyperbolic locally maximally attracting or repelling material surfaces elliptic material vortex boundaries and parabolic material jet cores these surfaces are generalizations of classical invariant manifolds known in dynamical systems theory to finitetime unsteady flow data this lagrangian perspective on coherence is concerned with structures formed by fluid elements as opposed to the eulerian notion of coherence which considers features in the instantaneous velocity field of the fluid various mathematical techniques have been developed to identify lcss in two and threedimenisonal data sets and have been applied to laboratory experiments numerical simulations and geophysical observations hairpin vortices are found on top of 
turbulent bulges of the turbulent wall wrapping around the turbulent wall in hairpin shaped loops where the name originates the hairpinshaped vortices are believed to be one of the most important and elementary sustained flow patterns in turbulent boundary layers hairpins are perhaps the simplest structures and models that represent large scale turbulent boundary layers are often constructed by breaking down individual hairpin vortices which could explain most of the features of wall turbulence although hairpin vortices form the basis of simple conceptual models of flow near a wall actual turbulent flows may contain a hierarchy of competing vortices each with their own degree of asymmetry and disturbanceshairpin vortices resemble the horseshoe vortex which exists because of perturbations of small upward motion due to differences in upward flowing velocities depending on the distance from the wall these form multiple packets of hairpin vortices where hairpin packets of different sizes could generate new vortices to add to the packet specifically close to the surface the tail ends of hairpin vortices could gradually converge resulting in provoked eruptions producing new hairpin vortices hence such eruptions are a regenerative process in which they act to create vortices near the surface and eject them out'</li></ul> |
| 25 | <ul><li>'nonhausdorff space it is possible for a sequence to converge to multiple different limits'</li><li>'see for example airy function the essential statement is this one $\int_{-1}^{1} e^{ikx^{2}}\,dx = \sqrt{\frac{\pi}{k}}\,e^{i\pi/4} + \mathcal{O}\!\left(\frac{1}{k}\right)$ in fact by contour integration it can be shown that the main term on the right hand side of the equation is the value of the integral on the left hand side extended over the range $(-\infty, \infty)$ for a proof see fresnel integral therefore it is the question of estimating away the integral over say $(1, \infty)$ this is the model for all onedimensional integrals $I(k)$ with $f$ having a single nondegenerate critical point at which $f$ has second derivative $\neq 0$ in fact the model case has second derivative 2 at 0 in order to scale using $k$ observe that replacing $k$ by $ck$ where $c$ is constant is the same as scaling $x$ by $\sqrt{c}$ it follows that for general values of $f''(0) > 0$ the factor $\sqrt{\pi/k}$ becomes $\sqrt{\frac{2\pi}{kf''(0)}}$ for $f''(0) < 0$ one uses the complex conjugate formula as mentioned before as can be seen from the formula the stationary phase approximation is a firstorder approximation of the asymptotic behavior of the integral the lowerorder terms can be understood as a sum over feynman diagrams with various weighting factors for well behaved $f$ common integrals in quantum field theory laplaces method method of steepest descent'</li><li>'in mathematical analysis semicontinuity or semi-continuity is a property of extended realvalued functions that is weaker than continuity an extended realvalued function $f$ is upper respectively lower semicontinuous at a point $x_0$ if roughly speaking the function values for arguments near $x_0$ are not much higher respectively lower than $f(x_0)$ a function is continuous if and only if it is both upper and lower semicontinuous if we take a continuous function and increase its value at a certain point $x_0$ to $f(x_0) + c$ for some $c > 0$ then the result is upper semicontinuous if we decrease its value to $f(x_0) - c$ then the result is lower semicontinuous the notion of upper and lower semicontinuous function was first introduced and studied by rene baire in his thesis in 1899 assume throughout that $X$ is a topological space and $f : X \to \overline{\mathbb{R}}$ is a function with values in the extended real numbers $\overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\} = [-\infty, +\infty]$ a function $f : X \to \overline{\mathbb{R}}$ is called upper semicontinuous at a point $x_0 \in X$ if for every real $y > f(x_0)$ there exists a neighborhood $U$ of $x_0$ such that $f(x) < y$ for all $x \in U$ equivalently $f$ is upper semicontinuous at $x_0$ if and only if $\limsup_{x \to x_0} f(x) \leq f(x_0)$ where lim sup is the limit superior of the function $f$ at the point $x_0$ a function $f : X \to \overline{\mathbb{R}}$ is called upper semicontinuous if it satisfies any of the
following equivalent conditions 1 the function is upper semicontinuous at every point of its domain 2 all sets $f^{-1}([-\infty, y)) = \{x \in X : f(x) < y\}$ with $y \in \mathbb{R}$ are open in $X$ where $[-\infty, y) = \{t \in \overline{\mathbb{R}} : t < y\}$'</li></ul> |
| 29 | <ul><li>'that would represent a desired level of health for the ecosystem examples may include species composition within an ecosystem or the state of habitat conditions based on local observations or stakeholder interviews thresholds can be used to help guide management particularly for a species by looking at the conservation status criteria established by either state or federal agencies and using models such as the minimum viable population size risk analysisa range of threats and disturbances both natural and human often can affect indicators risk is defined as the sensitivity of an indicator to an ecological disturbance several models can be used to assess risk such as population viability analysis monitoringevaluating the effectiveness of the implemented management strategies is very important in determining how management actions are affecting the ecosystem indicators evaluation this final step involves monitoring and assessing data to see how well the management strategies chosen are performing relative to the initial objectives stated the use of simulation models or multistakeholder groups can help to assess management it is important to note that many of these steps for implementing ecosystembased management are limited by the governance in place for a region the data available for assessing ecosystem status and reflecting on the changes occurring and the time frame in which to operate because ecosystems differ greatly and express varying degrees of vulnerability it is difficult to apply a functional framework that can be universally applied these outlined steps or components of ecosystembased management can for the most part be applied to multiple situations and are only suggestions for improving or guiding the challenges involved with managing complex issues because of the greater amount of influences impacts and interactions to account for problems obstacles and criticism often arise within ecosystembased management there is also a need for more data spatially and temporally to help management make sound decisions for the sustainability of the stock being studied the first commonly defined challenge is the need for meaningful and appropriate management units slocombe 1998b noted that these units must be broad and contain value for people in and outside of the protected area for example aberley 1993 suggests the use of bioregions as management units which can allow peoples involvement with that region to come through to define management units as inclusive regions rather that exclusive ecological zones would prevent further limitations created by narrow or restricting political and economic policy created from the units slocombe 1998b suggests that better management units should be flexible and build from existing units and that the biggest challenge is creating truly effect units for managers to compare against another issue is in the creation of administrative bodies they should operate as the essence of ecosystembased management working together towards mutually agreed upon goals gaps in administration or research competing objectives or priorities between management agencies and governments due to overlapping jurisdictions or obscure goals such as sustainability ecosystem'</li><li>'in fluid mechanics potential vorticity pv is a quantity which is proportional to the dot product of vorticity and stratification this quantity following a parcel of air or water can only be changed by diabatic or frictional processes it is a useful concept for understanding the generation 
of vorticity in cyclogenesis the birth and development of a cyclone especially along the polar front and in analyzing flow in the ocean potential vorticity pv is seen as one of the important theoretical successes of modern meteorology it is a simplified approach for understanding fluid motions in a rotating system such as the earths atmosphere and ocean its development traces back to the circulation theorem by bjerknes in 1898 which is a specialized form of kelvins circulation theorem starting from hoskins et al 1985 pv has been more commonly used in operational weather diagnosis such as tracing dynamics of air parcels and inverting for the full flow field even after detailed numerical weather forecasts on finer scales were made possible by increases in computational power the pv view is still used in academia and routine weather forecasts shedding light on the synoptic scale features for forecasters and researchers baroclinic instability requires the presence of a potential vorticity gradient along which waves amplify during cyclogenesis vilhelm bjerknes generalized helmholtzs vorticity equation 1858 and kelvins circulation theorem 1869 to inviscid geostrophic and baroclinic fluids ie fluids of varying density in a rotational frame which has a constant angular speed if we define circulation as the integral of the tangent component of velocity around a closed fluid loop and take the integral of a closed chain of fluid parcels we obtain $\frac{dC}{dt} = -\oint \frac{1}{\rho}\nabla p \cdot d\mathbf{r} - 2\omega \frac{dA_e}{dt}$ (1) where $\frac{d}{dt}$ is the time derivative in the rotational frame not inertial frame $C$ is the relative circulation $A_e$ is projection of the area surrounded by the fluid loop on the equatorial plane $\rho$ is density $p$ is pressure and $\omega$ is the frames angular speed with stokes theorem the first term on the righthandside can be rewritten as $\frac{dC}{dt} = \int_{A} \frac{\nabla \rho \times \nabla p}{\rho^{2}} \cdot d\mathbf{A} - 2\omega \frac{dA_e}{dt}$'</li><li>'sea rifts national geographic 156 680 – 705 ballard robert d 20170321 the eternal darkness a personal history of deepsea exploration hively will new princeton science library ed princeton nj isbn 9780691175621 oclc 982214518 crane kathleen 2003 sea legs tales of a woman oceanographer boulder colo westview press isbn 9780813340043 oclc 51553643 haymon rm 2014 hydrothermal vents at midocean ridges reference module in earth systems and environmental sciences elsevier doi101016b9780124095489090503 isbn 9780124095489 retrieved 20190627 macdonald ken c luyendyk bruce p 1981 the crest of the east pacific rise scientific american 244 5 100 – 117 bibcode1981sciam244e100m doi101038scientificamerican0581100 issn 00368733 jstor 24964420 van dover cindy 2000 the ecology of deepsea hydrothermal vents princeton nj princeton university press isbn 9780691057804 oclc 41548235'</li></ul> |
| 21 | <ul><li>'fruit cultivars with the same rootstock taking up and distributing water and minerals to the whole system those with more than three varieties are known as family trees when it is difficult to match a plant to the soil in a certain field or orchard growers may graft a scion onto a rootstock that is compatible with the soil it may then be convenient to plant a range of ungrafted rootstocks to see which suit the growing conditions best the fruiting characteristics of the scion may be considered later once the most successful rootstock has been identified rootstocks are studied extensively and often are sold with a complete guide to their ideal soil and climate growers determine the ph mineral content nematode population salinity water availability pathogen load and sandiness of their particular soil and select a rootstock which is matched to it genetic testing is increasingly common and new cultivars of rootstock are always being developed axr1 is a grape rootstock once widely used in california viticulture its name is an abbreviation for aramon rupestris ganzin no 1 which in turn is based on its parentage a cross made by a french grape hybridizer named ganzin between aramon a vitis vinifera cultivar and rupestris an american grape species vitis rupestris — also used on its own as rootstock rupestris st george or st george referring to a town in the south of france saint georges dorques where it was popular it achieved a degree of notoriety in california when after decades of recommendation as a preferred rootstock — despite repeated warnings from france and south africa about its susceptibility it had failed in europe in the early 1900s — it ultimately succumbed to phylloxera in the 1980s requiring the replanting of most of napa and sonoma with disastrous financial consequences those who resisted the urge to use axr1 such as david bennion of ridge vineyards saw their vineyards spared from phylloxera damage apple rootstocks are used for apple trees and are often the deciding factor of the size of the tree that is grafted onto the root dwarfing semidwarf semistandard and standard are the size benchmarks for the different sizes of roots that will be grown with the standard being the largest and dwarf being the smallest much of the worlds apple production is now using dwarf rootstocks to improve efficiency increase density and increase yields of fruit per acre the following is a list of the dwarfing rootstock that are commonly used today in apple production malling'</li><li>'or negligently cut destroy mutilate or remove plant material that is growing upon public land or upon land that is not his or hers without a written permit from the owner of the land signed by the owner of the land or the owner ’ s authorized agent as provided in subdivision ” while plant collecting may seem like a very safe and harmless practice there is a few things collectors should keep in mind to protect themselves first collectors should always be aware of the land where they are collecting as in hiking there will be certain limitations to whether or not public access is granted on a plot of land and if collection from that land is allowed for example in a national park of the united states plant collection is not allowed unless given special permission collecting internationally will involve some logistics such as official permits which will most likely be required to bring plants both from the country of collection and to the destination country the major herbaria can be useful to the average 
hobbyist in aiding them in acquiring these permitsif traveling to a remote location to access samples it is safe practice to inform someone of your whereabouts and planned time of return if traveling in hot weather collectors should bring adequate water to avoid dehydration forms of sun protection such as sunscreen and wide brimmed hats may be essential depending on location travel to remote locations will most likely involve walking measurable distances in wild terrain so precautions synonymous with those related to hiking should be taken plant discovery means the first time that a new plant was recorded for science often in the form of dried and pressed plants a herbarium specimen being sent to a botanical establishment such as kew gardens in london where it would be examined classified and namedplant introduction means the first time that living matter – seed cuttings or a whole plant – was brought back to europe thus the handkerchief tree davidia involucrata was discovered by pere david in 1869 but introduced to britain by ernest wilson in 1901often the two happened simultaneously thus sir joseph hooker discovered and introduced his himalayan rhododendrons between 1849 and 1851 botanical expedition list of irish plant collectors proplifting'</li><li>'a plant cutting is a piece of a plant that is used in horticulture for vegetative asexual propagation a piece of the stem or root of the source plant is placed in a suitable medium such as moist soil if the conditions are suitable the plant piece will begin to grow as a new plant independent of the parent a process known as striking a stem cutting produces new roots and a root cutting produces new stems some plants can be grown from leaf pieces called leaf cuttings which produce both stems and roots the scions used in grafting are also called cuttingspropagating plants from cuttings is an ancient form of cloning there are several advantages of cuttings mainly that the produced offspring are practically clones of their parent plants if a plant has favorable traits it can continue to pass down its advantageous genetic information to its offspring this is especially economically advantageous as it allows commercial growers to clone a certain plant to ensure consistency throughout their crops cuttings are used as a method of asexual reproduction in succulent horticulture commonly referred to as vegetative reproduction a cutting can also be referred to as a propagule succulents have evolved with the ability to use adventitious root formation in reproduction to increase fitness in stressful environments succulents grow in shallow soils rocky soils and desert soils seedlings from sexual reproduction have a low survival rate however plantlets from the excised stem cuttings and leaf cuttings broken off in the natural environment are more successfulcuttings have both water and carbon stored and available which are resources needed for plant establishment the detached part of the plant remains physiologically active allowing mitotic activity and new root structures to form for water and nutrient uptake asexual reproduction of plants is also evolutionarily advantageous as it allows plantlets to be better suited to their environment through retention of epigenetic memory heritable patterns of phenotypic differences that are not due to changes in dna but rather histone modification and dna methylation epigenetic memory is heritable through mitosis and thus advantageous stress response priming is retained in plantlets from excised stem adventitious root 
formation refers to roots that form from any structure of a plant that is not a root these roots can form as part of normal development or due to a stress response adventitious root formation from the excised stem cutting is a wound response at a molecular level when a cutting is first excised at the stem there is an immediate increase in jasmonic acid known to be necessary'</li></ul> |
| 2 | <ul><li>'do not have any solution such a system is called inconsistent an obvious example is $\begin{cases} x + y = 1 \\ 0x + 0y = 2 \end{cases}$ as 0 = 2 is false the second equation in the system has no solution therefore the system has no solution however not all inconsistent systems are recognized at first sight as an example consider the system $\begin{cases} 4x + 2y = 12 \\ -2x - y = -4 \end{cases}$ multiplying by 2 both sides of the second equation and adding it to the first one results in $0x + 0y = 4$ which clearly has no solution undetermined systems there are also systems which have infinitely many solutions in contrast to a system with a unique solution meaning a unique pair of values for x and y for example $\begin{cases} 4x + 2y = 12 \\ -2x - y = -6 \end{cases}$ isolating y in the second equation $y = -2x + 6$ and using this value in the first equation in the system $4x + 2(-2x + 6) = 12$ so $4x - 4x + 12 = 12$ and $12 = 12$ the equality is true but it does not provide a value for x indeed one can easily verify by just filling in some values of x that for any x there is a solution as long as $y = -2x + 6$ there is an infinite number of solutions for this system over and underdetermined systems systems with more variables than the number of linear equations are called underdetermined such a system if it has any solutions does not have a unique one but rather an infinitude of them an example of such a system is $\begin{cases} x + 2y = 10 \\ y - z = 2 \end{cases}$ when trying to solve it one is led to express some variables as functions of the other ones if any solutions exist but cannot express all solutions numerically because there are an infinite number of them if there are any a system with a higher number of equations than variables is called overdetermined if an overdetermined system has any solutions necessarily some equations are linear combinations of the others history of algebra binary operation gaussian'</li><li>'if the puzzle is prepared so that we should have one and only one unique solution we can set that all these variables a b c and e must be 0 otherwise there would be more than one solution some puzzle configurations may allow the player to use partitioning for complexity reduction an example is given in figure 5 each partition corresponds to a number of the objects hidden the sum of the hidden objects in the partitions must be equal to the total number of objects hidden on the board one possible way to determine a partitioning is to choose the lead clue cells which have no common neighbors the cells outside of the red transparent zones in figure 5 must be empty in other words there are no hidden objects in the allwhite cells since there must be a hidden object within the upper partition zone the third row from top shouldnt contain a hidden object this leads to the fact that the two variable cells on the bottom row around the clue cell must have hidden objects the rest of the solution is straightforward in some cases the player can set a variable cell as 1 and check if any inconsistency occurs the example in figure 6 shows an inconsistency check the cell marked with a hidden object δ is under the test its marking leads to setting all the variables grayed cells to 0 from this follows the inconsistency the clue cell marked red with value 1 does not have any remaining neighbor that
can include a hidden object therefore the cell under the test must not include a hidden object in algebraic form we have two equations $a + b + c + d = 1$ and $a + b + c + d + e + f + g = 1$ here a b c and d correspond to the top four grayed cells in figure 6 the cell with δ is represented by the variable f and the other two grayed cells are marked as e and g if we set $f = 1$ then $a = 0$, $b = 0$, $c = 0$, $d = 0$, $e = 0$, $g = 0$ the first equation above will have the left hand side equal to 0 while the right hand side has 1 a contradiction tryandcheck may need to be applied consequently in more than one step on some puzzles in order to reach a conclusion this is equivalent to a binary search algorithm to eliminate possible paths which lead to inconsistency because of binary variables the equation set for the solution does not possess the linearity property in other words the rank of the equation matrix may not always address the right complexity the complexity of this class of puzzles can be adjusted in several ways one of the simplest methods is to set a ratio of the number of the clue cells to the total number of the cells on the board however this may result in a largely varying'</li><li>'gröbner bases implicitly it is used in grouping the terms of a taylor series in several variables in algebraic geometry the varieties defined by monomial equations $x^{\alpha} = 0$ for some set of α have special properties of homogeneity this can be phrased in the language of algebraic groups in terms of the existence of a group action of an algebraic torus equivalently by a multiplicative group of diagonal matrices this area is studied under the name of torus embeddings monomial representation monomial matrix homogeneous polynomial homogeneous function multilinear form loglog plot power law sparse polynomial'</li></ul> |
| 26 | <ul><li>'permeability is a property of foundry sand with respect to how well the sand can vent ie how well gases pass through the sand and in other words permeability is the property by which we can know the ability of material to transmit fluids and gases the permeability is commonly tested to see if it is correct for the casting conditions the grain size shape and distribution of the foundry sand the type and quantity of bonding materials the density to which the sand is rammed and the percentage of moisture used for tempering the sand are important factors in regulating the degree of permeability an increase in permeability usually indicates a more open structure in the rammed sand and if the increase continues it will lead to penetrationtype defects and rough castings a decrease in permeability indicates tighter packing and could lead to blows and pinholes on a prepared mould surface as a sample permeability can be checked with use of a mould permeability attachment to permeability meter readings such obtained are of relative permeability and not absolute permeability the relative permeability reading on a mould surface is only used to gauge sampletosample variation on standard specimen as a sample for sands that can be compressed eg bentonitebonded sand also known as green sand a compressed or rammed sample is used to check permeability for sand that cannot be compressed eg resincoated sands a freely filled sample is used to check such a sample user may have to use an attachment to the permeability meter called a core permeability tube the absolute permeability number which has no units is determined by the rate of flow of air under standard pressure through a rammed cylindrical specimen din standards define the specimen dimensions to be 50 mm in diameter and 50 mm tall while the american foundry society defines it to be two inches in diameter and two inches tall the rammed cylindrical specimen formula is $PN = \frac{V \times H}{P \times A \times T}$ where V is the volume of air in ml passing through the specimen H is the height of the specimen in cm A is the cross sectional area of the specimen in cm2 P is the pressure of air in cm of water and T is the time in minutes the american foundry society has also released a chart where back pressure p from a rammed specimen placed on a permeability meter is correlated with a permeability number the permeability number so measured is used in foundries for recording permeability value'</li><li>'hardenability is the depth to which a steel is hardened after putting it through a heat treatment process it should not be confused with hardness which is a measure of a samples resistance to indentation or scratching it is an important property for welding since it is inversely proportional to weldability that is the ease of welding a material when a hot steel workpiece is quenched the area in contact with the water immediately cools and its temperature equilibrates with the quenching medium the inner depths of the material however do not cool so rapidly and in workpieces that are large the cooling rate may be slow enough to allow the austenite to transform fully into a structure other than martensite or bainite this results in a workpiece that does not have the same crystal structure throughout its entire depth with a softer core and harder shell the softer core is some combination of ferrite and cementite such as pearlite the hardenability of ferrous alloys ie steels is a function of the carbon content and other alloying elements and the grain size of the austenite the relative importance of the various alloying elements is
calculated by finding the equivalent carbon content of the material the fluid used for quenching the material influences the cooling rate due to varying thermal conductivities and specific heats substances like brine and water cool the steel much more quickly than oil or air if the fluid is agitated cooling occurs even more quickly the geometry of the part also affects the cooling rate of two samples of equal volume the one with higher surface area will cool faster the hardenability of a ferrous alloy is measured by a jominy test a round metal bar of standard size indicated in the top image is transformed to 100 austenite through heat treatment and is then quenched on one end with roomtemperature water the cooling rate will be highest at the end being quenched and will decrease as distance from the end increases subsequent to cooling a flat surface is ground on the test piece and the hardenability is then found by measuring the hardness along the bar the farther away from the quenched end that the hardness extends the higher the hardenability this information is plotted on a hardenability graphthe jominy endquench test was invented by walter e jominy 18931976 and al boegehold metallurgists in the research laboratories division of general motors corp in 1937 for his pioneering work in heat treating jominy was recognized by the american society for metals asm with its albert sauveur achievement award in 1944 jominy served as president of'</li><li>'and remelted to be reused the efficiency or yield of a casting system can be calculated by dividing the weight of the casting by the weight of the metal poured therefore the higher the number the more efficient the gating systemrisers there are three types of shrinkage shrinkage of the liquid solidification shrinkage and patternmakers shrinkage the shrinkage of the liquid is rarely a problem because more material is flowing into the mold behind it solidification shrinkage occurs because metals are less dense as a liquid than a solid so during solidification the metal density dramatically increases patternmakers shrinkage refers to the shrinkage that occurs when the material is cooled from the solidification temperature to room temperature which occurs due to thermal contraction solidification shrinkage most materials shrink as they solidify but as the adjacent table shows a few materials do not such as gray cast iron for the materials that do shrink upon solidification the type of shrinkage depends on how wide the freezing range is for the material for materials with a narrow freezing range less than 50 °c 122 °f a cavity known as a pipe forms in the center of the casting because the outer shell freezes first and progressively solidifies to the center pure and eutectic metals usually have narrow solidification ranges these materials tend to form a skin in open air molds therefore they are known as skin forming alloys for materials with a wide freezing range greater than 110 °c 230 °f much more of the casting occupies the mushy or slushy zone the temperature range between the solidus and the liquidus which leads to small pockets of liquid trapped throughout and ultimately porosity these castings tend to have poor ductility toughness and fatigue resistance moreover for these types of materials to be fluidtight a secondary operation is required to impregnate the casting with a lower melting point metal or resinfor the materials that have narrow solidification ranges pipes can be overcome by designing the casting to promote directional solidification 
which means the casting freezes first at the point farthest from the gate then progressively solidifies toward the gate this allows a continuous feed of liquid material to be present at the point of solidification to compensate for the shrinkage note that there is still a shrinkage void where the final material solidifies but if designed properly this will be in the gating system or riser risers and riser aids risers also known as feeders are the most common way of providing directional solidification it supplies liquid metal to the solidifying casting to compensate for solidification shrinkage for a riser to work properly the riser must solidify after'</li></ul> |
| 7 | <ul><li>'hear it is the par audiometric testing is used to determine hearing sensitivity and is part of a hearing conservation program this testing is part of the hearing conservation program that is used in the identification of significant hearing loss audiometric testing can identify those who have permanent hearing loss this is called noiseinduced permanent threshold shift niptscompleting baseline audiograms and periodically monitoring threshold levels is one way to track any changes in hearing and identify if there is a need to make improvements to the hearing conservation program osha which monitors workplaces in the united states to ensure safe and healthful working conditions specifies that employees should have a baseline audiogram established within 6 months of their first exposure to 85 dba timeweighted average twa if a worker is unable to obtain a baseline audiogram within 6 months of employment hpd is required to be worn if the worker is exposed to 85 dba or above twa hpd must be worn until a baseline audiogram is obtained under the msha which monitors compliance to standards within the mining industry an existing audiogram that meets specific standards can be used for the employees baseline before establishing baseline it is important that the employee limit excessive noise exposure that could potentially cause a temporary threshold shift and affect results of testing osha stipulates that an employee be noisefree for at least 14 hours prior to testingperiodic audiometric monitoring typically completed annually as recommended by osha can identify changes in hearing there are specific criteria that the change must meet in order to require action the criterion most commonly used is the standard threshold shift sts defined by a change of 10 db or greater averaged at 2000 3000 and 4000 hz age correction factors can be applied to the change in order to compensate for hearing loss that is agerelated rather than workrelated if an sts is found osha requires that the employee be notified of this change within 21 days furthermore any employee that is not currently wearing hpd is now required to wear protection if the employee is already wearing protection they should be refit with a new device and retrained on appropriate useanother determination that is made includes whether an sts is “ recordable ” under osha standards meaning the workplace must report the change to osha in order to be recordable the employees new thresholds at 2000 3000 and 4000 hz must exceed an average of 25 db hl msha standard differs slightly in terms of calculation and terminology msha considers whether an sts is “ reportable ” by determining if the average amount of change that occurs exceeds 25 db hl the various measures that are used in occupational audiometric testing'</li><li>'sense classroom program teaches children how hearing works how it can stop working and offers ideas for safe listening the classroom presentation satisfies the requirements for the science unit on sound taught in either grade 3 or 4 as well as the healthy living curriculum in grades 5 and 6 in addition the webpage provides resources games for children parents and teachers hearsmart an australian program initiated by the hearing cooperative research centre and the national acoustic laboratories nal hearsmart aims to improve the hearing health of all australians particularly those at greatest of risk of noiserelated tinnitus and hearing loss the program has a particular focus on promoting healthy hearing habits in musicians live 
music venues and patrons resources include know your noise an online risk calculator and speechinnoise test a short video that aims to raise awareness of tinnitus in musicians and a comprehensive website with detailed information just as program evaluation is necessary in workplace settings it is also an important component of educational hearing conservation programs to determine if any changes need to be made this evaluation may consist of two main parts assessment of students knowledge and assessment of their skills and behaviors to examine the level of knowledge acquired by the students a questionnaire is often given with the expectation of an 85 competency level among students if proficiency is too low changes should be implemented if the knowledge level is adequate assessing behaviors is then necessary to see if the children are using their newfound knowledge this evaluation can be done through classroom observation of both the students and teachers in noisy classroom environments such as music gym technology etc the mine safety and health administration msha requires that all feasible engineering and administrative controls be employed to reduce miners exposure levels to 90 dba twa the action level for enrollment in a hearing conservation program is 85 dba 8hour twa integrating all sound levels between 80 dba to at least 130 dba msha uses a 5db exchange rate the sound level in decibels that would result in halving if an increase in sound level or a doubling if a decreasein sound level the allowable exposure time to maintain the same noise dose at and above exposure levels of 90 dba twa the miner must wear hearing protection at and above exposure levels above 105 dba twa the miner must wear dual hearing protection miners may not be exposed to sounds exceeding 115 dba with or without hearing protection devices msha defines an sts as an average decrease in auditory sensitivity of 10 db hl at the frequencies 2000 3000 and 4000 hz 30 cfr part 62 the federal railroad administration fra encourages but does not require railroads to use administrative controls that reduce noise exposure duration when the wor'</li><li>'##earlyonset ome is associated with feeding of infants while lying down early entry into group child care parental smoking lack or too short a period of breastfeeding and greater amounts of time spent in group child care particularly those with a large number of children these risk factors increase the incidence and duration of ome during the first two years of life chronic suppurative otitis media csom is a chronic inflammation of the middle ear and mastoid cavity that is characterised by discharge from the middle ear through a perforated tympanic membrane for at least 6 weeks csom occurs following an upper respiratory tract infection that has led to acute otitis media this progresses to a prolonged inflammatory response causing mucosal middle ear oedema ulceration and perforation the middle ear attempts to resolve this ulceration by production of granulation tissue and polyp formation this can lead to increased discharge and failure to arrest the inflammation and to development of csom which is also often associated with cholesteatoma there may be enough pus that it drains to the outside of the ear otorrhea or the pus may be minimal enough to be seen only on examination with an otoscope or binocular microscope hearing impairment often accompanies this disease people are at increased risk of developing csom when they have poor eustachian tube function a history of multiple 
episodes of acute otitis media live in crowded conditions and attend paediatric day care facilities those with craniofacial malformations such as cleft lip and palate down syndrome and microcephaly are at higher riskworldwide approximately 11 of the human population is affected by aom every year or 709 million cases about 44 of the population develop csomaccording to the world health organization csom is a primary cause of hearing loss in children adults with recurrent episodes of csom have a higher risk of developing permanent conductive and sensorineural hearing loss in britain 09 of children and 05 of adults have csom with no difference between the sexes the incidence of csom across the world varies dramatically where high income countries have a relatively low prevalence while in low income countries the prevalence may be up to three times as great each year 21000 people worldwide die due to complications of csom adhesive otitis media occurs when a thin retracted ear drum becomes sucked into the middleear space and stuck ie adherent to the ossicles and other bones of the middle ear aom is far less common in breastfed infants than in formulafed infants'</li></ul> |
| 27 | <ul><li>'integration into microfluidic systems ie micrototal analytical systems or labonachip structures for instance ncams when incorporated into microfluidic devices can reproducibly perform digital switching allowing transfer of fluid from one microfluidic channel to another selectively separate and transfer analytes by size and mass mix reactants efficiently and separate fluids with disparate characteristics in addition there is a natural analogy between the fluid handling capabilities of nanofluidic structures and the ability of electronic components to control the flow of electrons and holes this analogy has been used to realize active electronic functions such as rectification and fieldeffect and bipolar transistor action with ionic currents nanofluidics is also applied to nanooptics for producing tuneable microlens arrays nanofluidics have had a significant impact in biotechnology medicine and clinical diagnostics with the development of labonachip devices for pcr and related techniques attempts have been made to understand the behaviour of flowfields around nanoparticles in terms of fluid forces as a function of reynolds and knudsen number using computational fluid dynamics the relationship between lift drag and reynolds number has been shown to differ dramatically at the nanoscale compared with macroscale fluid dynamics there are a variety of challenges associated with the flow of liquids through carbon nanotubes and nanopipes a common occurrence is channel blocking due to large macromolecules in the liquid also any insoluble debris in the liquid can easily clog the tube a solution for this that researchers are hoping to find is a low friction coating or channel materials that help reduce the blocking of the tubes also large polymers including biologically relevant molecules such as dna often fold in vivo causing blockages typical dna molecules from a virus have lengths of approx 100 – 200 kilobases and will form a random coil of the radius some 700 nm in aqueous solution at 20 °c this is also several times greater than the pore diameter of even large carbon pipes and two orders of magnitude larger than the diameter of a single walled carbon nanotube nanomechanics nanotechnology microfluidics nanofluidic circuitry'</li><li>'the tomlinson model also known as the prandtl – tomlinson model is one of the most popular models in nanotribology widely used as the basis for many investigations of frictional mechanisms on the atomic scale essentially a nanotip is dragged by a spring over a corrugated energy landscape a frictional parameter η can be introduced to describe the ratio between the energy corrugation and the elastic energy stored in the spring if the tipsurface interaction is described by a sinusoidal potential with amplitude $V_0$ and periodicity $a$ then $\eta = \frac{4\pi^{2}V_0}{ka^{2}}$ where $k$ is the spring constant if $\eta < 1$ the tip slides continuously across the landscape superlubricity regime if $\eta > 1$ the tip motion consists in abrupt jumps between the minima of the energy landscape stickslip regime the name tomlinson model is however historically incorrect the paper by tomlinson that is often cited in this context did not contain the model known as the tomlinson model and suggests an adhesive contribution to friction in reality it was ludwig prandtl who suggested in 1928 this model to describe the plastic deformations in crystals as well as the dry friction in the meantime many researchers still call this model the prandtl – tomlinson model in russia this model was
introduced by the soviet physicists yakov frenkel and t kontorova the frenkel defect became firmly fixed in the physics of solids and liquids in the 1930s this research was supplemented with works on the theory of plastic deformation their theory now known as the frenkel – kontorova model is important in the study of dislocations'</li><li>'be medical nanorobotics or nanomedicine an area pioneered by robert freitas in numerous books and papers the ability to design build and deploy large numbers of medical nanorobots would at a minimum make possible the rapid elimination of disease and the reliable and relatively painless recovery from physical trauma medical nanorobots might also make possible the convenient correction of genetic defects and help to ensure a greatly expanded lifespan more controversially medical nanorobots might be used to augment natural human capabilities one study has reported on how conditions like tumors arteriosclerosis blood clots leading to stroke accumulation of scar tissue and localized pockets of infection can possibly be addressed by employing medical nanorobots another proposed application of molecular nanotechnology is utility fog — in which a cloud of networked microscopic robots simpler than assemblers would change its shape and properties to form macroscopic objects and tools in accordance with software commands rather than modify the current practices of consuming material goods in different forms utility fog would simply replace many physical objects yet another proposed application of mnt would be phasedarray optics pao however this appears to be a problem addressable by ordinary nanoscale technology pao would use the principle of phasedarray millimeter technology but at optical wavelengths this would permit the duplication of any sort of optical effect but virtually users could request holograms sunrises and sunsets or floating lasers as the mood strikes pao systems were described in bc crandalls nanotechnology molecular speculations on global abundance in the brian wowk article phasedarray optics molecular manufacturing is a potential future subfield of nanotechnology that would make it possible to build complex structures at atomic precision molecular manufacturing requires significant advances in nanotechnology but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories weighing a kilogram or more when nanofactories gain the ability to produce other nanofactories production may only be limited by relatively abundant factors such as input materials energy and softwarethe products of molecular manufacturing could range from cheaper massproduced versions of known hightech products to novel products with added capabilities in many areas of application some applications that have been suggested are advanced smart materials nanosensors medical nanorobots and space travel additionally molecular manufacturing could be used to cheaply produce highly advanced durable weapons which is an area of special concern regarding the impact of nanotechnology being equipped with compact computers and motors these could be increasingly autonomous and have a large range of capabilitiesaccording to chris phoenix and mike treder from the center for responsible nano'</li></ul> |
| 31 | <ul><li>'eight perfections the capacity to offset the force of ones facticity this is defined in relation to pullness or garima which concerns worldly weight and mass zen buddhism teaches that one ought to become as light as being itself zen teaches one not only to find the lightness of being “ bearable ” but to rejoice in this lightness this stands as an interesting opposition to kunderas evaluation of lightness'</li><li>'exact order and studies with children in canada india peru samoa and thailand indicate that they all pass the false belief task at around the same time suggesting that children develop theory of mind consistently around the worldhowever children from iran and china develop theory of mind in a slightly different order although they begin the development of theory of mind around the same time toddlers from these countries understand knowledge access before western children but take longer to understand diverse beliefs researchers believe this swap in the developmental order is related to the culture of collectivism in iran and china which emphasizes interdependence and shared knowledge as opposed to the culture of individualism in western countries which promotes individuality and accepts differing opinions because of these different cultural values iranian and chinese children might take longer to understand that other people have different beliefs and opinions this suggests that the development of theory of mind is not universal and solely determined by innate brain processes but also influenced by social and cultural factors theory of mind can help historians to more properly understand historical figures characters for example thomas jefferson emancipationists like douglas l wilson and scholars at the thomas jefferson foundation view jefferson as an opponent of slavery all his life noting jeffersons attempts within the limited range of options available to him to undermine slavery his many attempts at abolition legislation the manner in which he provided for slaves and his advocacy of their more humane treatment this view contrasts with that of revisionists like paul finkelman who criticizes jefferson for racism slavery and hypocrisy emancipationist views on this hypocrisy recognize that if he tried to be true to his word it would have alienated his fellow virginians in another example franklin d roosevelt did not join naacp leaders in pushing for federal antilynching legislation as he believed that such legislation was unlikely to pass and that his support for it would alienate southern congressmen including many of roosevelts fellow democrats whether children younger than three or four years old have a theory of mind is a topic of debate among researchers it is a challenging question due to the difficulty of assessing what prelinguistic children understand about others and the world tasks used in research into the development of theory of mind must take into account the umwelt of the preverbal child one of the most important milestones in theory of mind development is the ability to attribute false belief in other words to understand that other people can believe things which are not true to do this it is suggested one must understand how knowledge is formed that peoples beliefs are based on their knowledge that mental states can differ from reality and that peoples behavior can be predicted by their mental states numerous versions of false'</li><li>'bodily functions such as heart and liver according to descartes animals only had a body and not a soul which 
distinguishes humans from animals the distinction between mind and body is argued in meditation vi as follows i have a clear and distinct idea of myself as a thinking nonextended thing and a clear and distinct idea of body as an extended and nonthinking thing whatever i can conceive clearly and distinctly god can so create the central claim of what is often called cartesian dualism in honor of descartes is that the immaterial mind and the material body while being ontologically distinct substances causally interact this is an idea that continues to feature prominently in many noneuropean philosophies mental events cause physical events and vice versa but this leads to a substantial problem for cartesian dualism how can an immaterial mind cause anything in a material body and vice versa this has often been called the problem of interactionism descartes himself struggled to come up with a feasible answer to this problem in his letter to elisabeth of bohemia princess palatine he suggested that spirits interacted with the body through the pineal gland a small gland in the centre of the brain between the two hemispheres the term cartesian dualism is also often associated with this more specific notion of causal interaction through the pineal gland however this explanation was not satisfactory how can an immaterial mind interact with the physical pineal gland because descartes was such a difficult theory to defend some of his disciples such as arnold geulincx and nicolas malebranche proposed a different explanation that all mind – body interactions required the direct intervention of god according to these philosophers the appropriate states of mind and body were only the occasions for such intervention not real causes these occasionalists maintained the strong thesis that all causation was directly dependent on god instead of holding that all causation was natural except for that between mind and body in addition to already discussed theories of dualism particularly the christian and cartesian models there are new theories in the defense of dualism naturalistic dualism comes from australian philosopher david chalmers born 1966 who argues there is an explanatory gap between objective and subjective experience that cannot be bridged by reductionism because consciousness is at least logically autonomous of the physical properties upon which it supervenes according to chalmers a naturalistic account of property dualism requires a new fundamental category of properties described by new laws of supervenience the challenge being analogous to that of understanding electricity based on the mechanistic and newtonian models of materialism prior to maxwell'</li></ul> |
| 12 | <ul><li>'x is equivalent to counting injective functions n → x when n x and also to counting surjective functions n → x when n x counting multisets of size n also known as ncombinations with repetitions of elements in x is equivalent to counting all functions n → x up to permutations of n counting partitions of the set n into x subsets is equivalent to counting all surjective functions n → x up to permutations of x counting compositions of the number n into x parts is equivalent to counting all surjective functions n → x up to permutations of n the various problems in the twelvefold way may be considered from different points of view traditionally many of the problems in the twelvefold way have been formulated in terms of placing balls in boxes or some similar visualization instead of defining functions the set n can be identified with a set of balls and x with a set of boxes the function ƒ n → x then describes a way to distribute the balls into the boxes namely by putting each ball a into box ƒa a function ascribes a unique image to each value in its domain this property is reflected by the property that any ball can go into only one box together with the requirement that no ball should remain outside of the boxes whereas any box can accommodate an arbitrary number of balls requiring in addition ƒ to be injective means forbidding to put more than one ball in any one box while requiring ƒ to be surjective means insisting that every box contain at least one ball counting modulo permutations of n or x is reflected by calling the balls or the boxes respectively indistinguishable this is an imprecise formulation intended to indicate that different configurations are not to be counted separately if one can be transformed into the other by some interchange of balls or of boxes this possibility of transformation is formalized by the action by permutations another way to think of some of the cases is in terms of sampling in statistics imagine a population of x items or people of which we choose n two different schemes are normally described known as sampling with replacement and sampling without replacement in the former case sampling with replacement once weve chosen an item we put it back in the population so that we might choose it again the result is that each choice is independent of all the other choices and the set of samples is technically referred to as independent identically distributed in the latter case however once we have chosen an item we put it aside so that we can not choose it again this means that the act of choosing an'</li><li>'##widehat qshgeq varepsilon 2 where r displaystyle r and s displaystyle s are iid samples of size m displaystyle m drawn according to the distribution p displaystyle p one can view r displaystyle r as the original randomly drawn sample of length m displaystyle m while s displaystyle s may be thought as the testing sample which is used to estimate q p h displaystyle qph permutation since r displaystyle r and s displaystyle s are picked identically and independently so swapping elements between them will not change the probability distribution on r displaystyle r and s displaystyle s so we will try to bound the probability of q r h − q s h ≥ ε 2 displaystyle widehat qrhwidehat qshgeq varepsilon 2 for some h ∈ h displaystyle hin h by considering the effect of a specific collection of permutations of the joint sample x r s displaystyle xrs specifically we consider permutations σ x displaystyle sigma x which swap x i displaystyle xi and x m i 
displaystyle xmi in some subset of 1 2 m displaystyle 12m the symbol r s displaystyle rs means the concatenation of r displaystyle r and s displaystyle s reduction to a finite class we can now restrict the function class h displaystyle h to a fixed joint sample and hence if h displaystyle h has finite vc dimension it reduces to the problem to one involving a finite function classwe present the technical details of the proof lemma let v x ∈ x m q p h − q x h ≥ ε for some h ∈ h displaystyle vxin xmqphwidehat qxhgeq varepsilon text for some hin h and r r s ∈ x m × x m q r h − q s h ≥ ε 2 for some h ∈ h displaystyle rrsin xmtimes xmwidehat qrhwidehat qshgeq varepsilon 2text for some hin h then for m ≥ 2 ε 2 displaystyle mgeq frac 2varepsilon 2 p m v ≤ 2 p 2 m r displaystyle pmvleq 2p2mr proof by the triangle inequality if q p h − q r h ≥ ε displaystyle qphwidehat qrhgeq varepsilon and q p h − q s h ≤ ε 2 displaystyle qphwidehat qshleq varepsilon 2 then q r h − q s h ≥'</li><li>'of bad events a displaystyle mathcal a we wish to avoid that is determined by a collection of mutually independent random variables p displaystyle mathcal p the algorithm proceeds as follows [UNK] p ∈ p displaystyle forall pin mathcal p v p ← displaystyle vpleftarrow a random evaluation of p while [UNK] a ∈ a displaystyle exists ain mathcal a such that a is satisfied by v p p displaystyle vpmathcal p pick an arbitrary satisfied event a ∈ a displaystyle ain mathcal a [UNK] p ∈ vbl a displaystyle forall pin textvbla v p ← displaystyle vpleftarrow a new random evaluation of p return v p p displaystyle vpmathcal p in the first step the algorithm randomly initializes the current assignment vp for each random variable p ∈ p displaystyle pin mathcal p this means that an assignment vp is sampled randomly and independently according to the distribution of the random variable p the algorithm then enters the main loop which is executed until all events in a displaystyle mathcal a are avoided at which point the algorithm returns the current assignment at each iteration of the main loop the algorithm picks an arbitrary satisfied event a either randomly or deterministically and resamples all the random variables that determine a let p displaystyle mathcal p be a finite set of mutually independent random variables in the probability space ω let a displaystyle mathcal a be a finite set of events determined by these variables if there exists an assignment of reals x a → 0 1 displaystyle xmathcal ato 01 to the events such that [UNK] a ∈ a pr a ≤ x a [UNK] b ∈ γ a 1 − x b displaystyle forall ain mathcal apraleq xaprod bin gamma a1xb then there exists an assignment of values to the variables p displaystyle mathcal p avoiding all of the events in a displaystyle mathcal a moreover the randomized algorithm described above resamples an event a ∈ a displaystyle ain mathcal a at most an expected x a 1 − x a displaystyle frac xa1xa times before it finds such an evaluation thus the expected total number of resampling steps and therefore the expected runtime of the algorithm is at most [UNK] a ∈ a x a 1 − x a displaystyle sum ain mathcal afrac xa1xa the proof of this theorem using the method of entropy compression can be found in the paper by moser and tardos the requirement of an assignment function x satisfying a set of inequalities in the'</li></ul> |
| 36 | <ul><li>'create a redundant phrase for example laser light amplification by stimulated emission of radiation light is light produced by a light amplification process similarly opec countries are two or more member states of the organization of the petroleum exporting countries whereas opec by itself denotes the overall organization pleonasm § bilingual tautological expressions recursive acronym tautology'</li><li>'a sermon when he got on the pulpit he asked do you know what i am going to say the audience replied no so he announced i have no desire to speak to people who dont even know what i will be talking about and left the people felt embarrassed and called him back again the next day this time when he asked the same question the people replied yes so nasreddin said well since you already know what i am going to say i wont waste any more of your time and left now the people were really perplexed they decided to try one more time and once again invited the mullah to speak the following week once again he asked the same question – do you know what i am going to say now the people were prepared and so half of them answered yes while the other half replied no so nasreddin said let the half who know what i am going to say tell it to the half who dont and left whom do you believe a neighbour came to the gate of hodja nasreddins yard the hodja went to meet him outside would you mind hodja the neighbour asked can you lend me your donkey today i have some goods to transport to the next town the hodja didnt feel inclined to lend out the animal to that particular man however so not to seem rude he answered im sorry but ive already lent him to somebody else all of a sudden the donkey could be heard braying loudly behind the wall of the yard but hodja the neighbour exclaimed i can hear it behind that wall whom do you believe the hodja replied indignantly the donkey or your hodja taste the same some children saw nasreddin coming from the vineyard with two baskets full of grapes loaded on his donkey they gathered around him and asked him to give them a taste nasreddin picked up a bunch of grapes and gave each child a grape you have so much but you gave us so little the children whined there is no difference whether you have a basketful or a small piece they all taste the same nasreddin answered and continued on his way nasreddins ring mullah had lost his ring in the living room he searched for it for a while but since he could not find it he went out into the yard and began to look there his wife who saw what he was doing asked mullah you lost your ring in the room why are you looking for it in the yard ” mullah stroked his beard and said the room is too dark and i can ’ t see very well i came out to'</li><li>'uses to investigate for example the nature or definition of ethical concepts such as justice or virtue according to vlastos it has the following steps socrates interlocutor asserts a thesis for example courage is endurance of the soul socrates decides whether the thesis is false and targets for refutation socrates secures his interlocutors agreement to further premises for example courage is a fine thing and ignorant endurance is not a fine thing socrates then argues and the interlocutor agrees these further premises imply the contrary of the original thesis in this case it leads to courage is not endurance of the soul socrates then claims he has shown his interlocutors thesis is false and its negation is trueone elenctic examination can lead to a new more refined examination of the 
concept being considered in this case it invites an examination of the claim courage is wise endurance of the soul most socratic inquiries consist of a series of elenchi and typically end in puzzlement known as aporia frede points out vlastos conclusion in step 5 above makes nonsense of the aporetic nature of the early dialogues having shown a proposed thesis is false is insufficient to conclude some other competing thesis must be true rather the interlocutors have reached aporia an improved state of still not knowing what to say about the subject under discussion the exact nature of the elenchus is subject to a great deal of debate in particular concerning whether it is a positive method leading to knowledge or a negative method used solely to refute false claims to knowledgew k c guthrie in the greek philosophers sees it as an error to regard the socratic method as a means by which one seeks the answer to a problem or knowledge guthrie claims that the socratic method actually aims to demonstrate ones ignorance socrates unlike the sophists did believe that knowledge was possible but believed that the first step to knowledge was recognition of ones ignorance guthrie writes socrates was accustomed to say that he did not himself know anything and that the only way in which he was wiser than other men was that he was conscious of his own ignorance while they were not the essence of the socratic method is to convince the interlocutor that whereas he thought he knew something in fact he does not socrates generally applied his method of examination to concepts that seem to lack any concrete definition eg the key moral concepts at the time the virtues of piety wisdom temperance courage and justice such an examination challenged the implicit moral beliefs of the interlocutors bringing out inadequacies and inconsistencies in their beliefs and usually resulting in aporia in view of such'</li></ul> |
| 8 | <ul><li>'an integrated architecture with application software portable across an assembly of common hardware modules it has been used in fourth generation jet fighters and the latest generation of airliners military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems the vast array of sensors available to the military is used for whatever tactical means required as with aircraft management the bigger sensor platforms like the e ‑ 3d jstars astor nimrod mra4 merlin hm mk 1 have missionmanagement computers police and ems aircraft also carry sophisticated tactical sensors while aircraft communications provide the backbone for safe flight the tactical systems are designed to withstand the rigors of the battle field uhf vhf tactical 30 – 88 mhz and satcom systems combined with eccm methods and cryptography secure the communications data links such as link 11 16 22 and bowman jtrs and even tetra provide the means of transmitting data such as images targeting information etc airborne radar was one of the first tactical sensors the benefit of altitude providing range has meant a significant focus on airborne radar technologies radars include airborne early warning aew antisubmarine warfare asw and even weather radar arinc 708 and ground trackingproximity radar the military uses radar in fast jets to help pilots fly at low levels while the civil market has had weather radar for a while there are strict rules about using it to navigate the aircraft dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats maritime support aircraft can drop active and passive sonar devices sonobuoys and these are also used to determine the location of enemy submarines electrooptic systems include devices such as the headup display hud forward looking infrared flir infrared search and track and other passive infrared devices passive infrared sensor these are all used to provide imagery and information to the flight crew this imagery is used for everything from search and rescue to navigational aids and target acquisition electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats they can be used to launch devices in some cases automatically to counter direct threats against the aircraft they are also used to determine the state of a threat and identify it the avionics systems in military commercial and advanced models of civilian aircraft are interconnected using an avionics databus common avionics databus protocols with their primary application include aircraft data network adn ethernet derivative for commercial aircraft avionics fullduplex switched ethernet afdx specific implementation of arinc 664 adn for commercial aircraft arinc 429 generic mediumspeed data sharing for private'</li><li>'in the earlier beam systems the signal was turned on and off entirely corresponding to a modulation index of 100 the determination of angle within the beam is based on the comparison of the audible strength of the two signals in ils a more complex system of signals and antennas varies the modulation of two signals across the entire width of the beam pattern the system relies on the use of sidebands secondary frequencies that are created when two different signals are mixed for instance if one takes a radio frequency signal at 10 mhz and mixes that with an audible tone at 2500 hz four signals will be produced at the 
original signals frequencies of 2500 and 10000000 hertz and sidebands 9997500 and 10002500 hertz the original 2500 hz signals frequency is too low to travel far from an antenna but the other three signals are all radio frequency and can be effectively transmittedils starts by mixing two modulating signals to the carrier one at 90 hz and another at 150 this creates a signal with five radio frequencies in total the carrier and four sidebands this combined signal known as the csb for carrier and sidebands is sent out evenly from an antenna array the csb is also sent into a circuit that suppresses the original carrier leaving only the four sideband signals this signal known as sbo for sidebands only is also sent to the antenna arrayfor lateral guidance known as the localizer the antenna is normally placed centrally at the far end of the runway and consists of multiple antennas in an array normally about the same width of the runway each individual antenna has a particular phase shift and power level applied only to the sbo signal such that the resulting signal is retarded 90 degrees on the left side of the runway and advanced 90 degrees on the right additionally the 150 hz signal is inverted on one side of the pattern another 180 degree shift due to the way the signals mix in space the sbo signals destructively interfere with and almost eliminate each other along the centerline leaving just the csb signal predominating at any other location on either side of the centerline the sbo and csb signals combine in different ways so that one modulating signal predominatesa receiver in front of the array will receive both of these signals mixed together using simple electronic filters the original carrier and two sidebands can be separated and demodulated to extract the original amplitudemodulated 90 and 150 hz signals these are then averaged to produce two direct current dc signals each of these signals represents not the strength of the original signal but the strength of the modulation relative to the carrier which varies across'</li><li>'excessive manoeuvre could not have been performed greatly reducing chances of recovery against this objection airbus has responded that an a320 in the situation of flight 006 never would have fallen out of the air in the first place the envelope protection would have automatically kept it in level flight in spite of the drag of a stalled engine in april 1995 fedex flight 705 a mcdonnell douglas dc1030 was hijacked by a fedex flight engineer who facing a dismissal attempted to hijack the plane and crash it into fedex headquarters so that his family could collect his life insurance policy after being attacked and severely injured the flight crew was able to fight back and land the plane safely in order to keep the attacker off balance and out of the cockpit the crew had to perform extreme maneuvers including a barrel roll and a dive so fast the airplane couldnt measure its airspeed had the crew not been able to exceed the planes flight envelope the crew might not have been successful american airlines flight 587 an airbus a300 crashed in november 2001 when the vertical stabilizer broke off due to excessive rudder inputs made by the pilot a flightenvelope protection system could have prevented this crash though it can still be argued that an override button should be provided for contingencies when the pilots are aware of the need to exceed normal limits us airways flight 1549 an airbus a320 experienced a dual engine failure after a bird strike and subsequently landed 
safely in the hudson river in january 2009 the ntsb accident report mentions the effect of flight envelope protection the airplane ’ s airspeed in the last 150 feet of the descent was low enough to activate the alphaprotection mode of the airplane ’ s flybywire envelope protection features because of these features the airplane could not reach the maximum angle of attack aoa attainable in pitch normal law for the airplane weight and configuration however the airplane did provide maximum performance for the weight and configuration at that time the flight envelope protections allowed the captain to pull full aft on the sidestick without the risk of stalling the airplane qantas 72 suffered an uncommanded pitchdown due to erroneous data from one of its adiru computers air france flight 447 an airbus a330 entered an aerodynamic stall from which it did not recover and crashed into the atlantic ocean in june 2009 killing all aboard temporary inconsistency between measured speeds likely a result of the obstruction of the pitot tubes by ice crystals caused autopilot disconnection and reconfiguration to alternate law a second consequence of the reconfiguration'</li></ul> |
| 4 | <ul><li>'covariances and can be computed using standard spreadsheet functions regression dilution deming regression a special case with two predictors and independent errors errorsinvariables model gausshelmert model linear regression least squares principal component analysis principal component regression i hnetynkova m plesinger d m sima z strakos and s van huffel the total least squares problem in ax ≈ b a new classification with the relationship to the classical works simax vol 32 issue 3 2011 pp 748 – 770 available as a preprint m plesinger the total least squares problem and reduction of data in ax ≈ b doctoral thesis tu of liberec and institute of computer science as cr prague 2008 phd thesis c c paige z strakos core problems in linear algebraic systems siam j matrix anal appl 27 2006 pp 861 – 875 doi101137040616991 s van huffel and p lemmerling total least squares and errorsinvariables modeling analysis algorithms and applications dordrecht the netherlands kluwer academic publishers 2002 s jo and s w kim consistent normalized least mean square filtering with noisy data matrix ieee trans signal process vol 53 no 6 pp 2112 – 2123 jun 2005 r d degroat and e m dowling the data least squares problem and channel equalization ieee trans signal process vol 41 no 1 pp 407 – 411 jan 1993 s van huffel and j vandewalle the total least squares problems computational aspects and analysis siam publications philadelphia pa 1991 doi10113719781611971002 t abatzoglou and j mendel constrained total least squares in proc ieee int conf acoust speech signal process icassp ’ 87 apr 1987 vol 12 pp 1485 – 1488 p de groen an introduction to total least squares in nieuw archief voor wiskunde vierde serie deel 14 1996 pp 237 – 253 arxivorg g h golub and c f van loan an analysis of the total least squares problem siam j on numer anal 17 1980 pp 883 – 893 doi1011370717073 perpendicular regression of a line at mathpages a r amirisimkooei and s jazaeri weighted total least squares formulated by standard least squares theoryin journal of geodetic science 2 2 113 – 124 2012 1'</li><li>'circle or square of arbitrary size to be specified for example a focalmean operator could be used to compute the mean value of all the cells within 1000 meters a circle of each cell zonal operators functions that operate on regions of identical value these are commonly used with discrete fields also known as categorical coverages where space is partitioned into regions of homogeneous nominal or categorical value of a property such as land cover land use soil type or surface geologic formation unlike local and focal operators zonal operators do not operate on each cell individually instead all of the cells of a given value are taken as input to a single computation with identical output being written to all of the corresponding cells for example a zonalmean operator would take in two layers one with values representing the regions eg dominant vegetation species and another of a related quantitative property eg percent canopy cover for each unique value found in the former grid the software collects all of the corresponding cells in the latter grid computes the arithmetic mean and writes this value to all of the corresponding cells in the output grid global operators functions that summarize the entire grid these were not included in tomlins work and are not technically part of map algebra because the result of the operation is not a raster grid ie it is not closed but a single value or summary table however they are useful to 
include in the general toolkit of operations for example a globalmean operator would compute the arithmetic mean of all of the cells in the input grid and return a single mean value some also consider operators that generate a new grid by evaluating patterns across the entire input grid as global which could be considered part of the algebra an example of these are the operators for evaluating cost distance several gis software packages implement map algebra concepts including erdas imagine qgis grass gis terrset pcraster and arcgis in tomlins original formulation of cartographic modeling in the map analysis package he designed a simple procedural language around the algebra operators to allow them to be combined into a complete procedure with additional structures such as conditional branching and looping however in most modern implementations map algebra operations are typically one component of a general procedural processing system such as a visual modeling tool or a scripting language for example arcgis implements map algebra in both its visual modelbuilder tool and in python here pythons overloading capability allows simple operators and functions to be used for raster grids for example rasters can be multiplied using the same arithmetic operator used for multiplying numbershere are some examples in mapbasic the scripting language for mapinfo professional demo'</li><li>'computational mathematics is an area of mathematics devoted to the interaction between mathematics and computer computationa large part of computational mathematics consists roughly of using mathematics for allowing and improving computer computation in areas of science and engineering where mathematics are useful this involves in particular algorithm design computational complexity numerical methods and computer algebra computational mathematics refers also to the use of computers for mathematics itself this includes mathematical experimentation for establishing conjectures particularly in number theory the use of computers for proving theorems for example the four color theorem and the design and use of proof assistants computational mathematics emerged as a distinct part of applied mathematics by the early 1950s currently computational mathematics can refer to or include computational science also known as scientific computation or computational engineering solving mathematical problems by computer simulation as opposed to analytic methods of applied mathematics numerical methods used in scientific computation for example numerical linear algebra and numerical solution of partial differential equations stochastic methods such as monte carlo methods and other representations of uncertainty in scientific computation the mathematics of scientific computation in particular numerical analysis the theory of numerical methods computational complexity computer algebra and computer algebra systems computerassisted research in various areas of mathematics such as logic automated theorem proving discrete mathematics combinatorics number theory and computational algebraic topology cryptography and computer security which involve in particular research on primality testing factorization elliptic curves and mathematics of blockchain computational linguistics the use of mathematical and computer techniques in natural languages computational algebraic geometry computational group theory computational geometry computational number theory computational topology computational statistics algorithmic information theory algorithmic game 
theory mathematical economics the use of mathematics in economics finance and to certain extents of accounting experimental mathematics mathematics portal cucker f 2003 foundations of computational mathematics special volume handbook of numerical analysis northholland publishing isbn 9780444512475 harris j w stocker h 1998 handbook of mathematics and computational science springerverlag isbn 9780387947464 hartmann ak 2009 practical guide to computer simulations world scientific isbn 9789812834157 archived from the original on february 11 2009 retrieved may 3 2012 nonweiler t r 1986 computational mathematics an introduction to numerical approximation john wiley and sons isbn 9780470202609 gentle j e 2007 foundations of computational science springerverlag isbn 9780387004501 white r e 2003 computational mathematics models methods and analysis with matlab chapman and hall isbn 9781584883647 yang x s 2008 introduction to computational mathematics world scientific isbn 9789812818171 strang g 2007 computational science and engineering wiley isbn 9780961408817'</li></ul> |
| 6 | <ul><li>'on graphics processing units many codes and software packages exist along with various researchers and consortia maintaining them most codes tend to be nbody packages or fluid solvers of some sort examples of nbody codes include changa modest nbodylaborg and starlabfor hydrodynamics there is usually a coupling between codes as the motion of the fluids usually has some other effect such as gravity or radiation in astrophysical situations for example for sphnbody there is gadget and swift for gridbasednbody ramses enzo flash and artamuse 2 takes a different approach called noahs ark than the other packages by providing an interface structure to a large number of publicly available astronomical codes for addressing stellar dynamics stellar evolution hydrodynamics and radiative transport millennium simulation eris and bolshoi cosmological simulation are astrophysical supercomputer simulations plasma modeling computational physics theoretical astronomy and theoretical astrophysics center for computational relativity and gravitation university of california highperformance astrocomputing center beginnerintermediate level astrophysics with a pc an introduction to computational astrophysics paul hellings willmannbell 1st english ed edition practical astronomy with your calculator peter duffettsmith cambridge university press 3rd edition 1988advancedgraduate level numerical methods in astrophysics an introduction series in astronomy and astrophysics peter bodenheimer gregory p laughlin michal rozyczka harold w yorke taylor francis 2006 open cluster membership probability based on kmeans clustering algorithm mohamed abd el aziz i m selim a essam exp astron 2016 automatic detection of galaxy type from datasets of galaxies image based on image retrieval approach mohamed abd el aziz i m selim shengwu xiong scientific reports 7 4463 2017journals open access living reviews in computational astrophysics computational astrophysics and cosmology'</li><li>'committee g i taylor estimated the amount of energy that would be released by the explosion of an atomic bomb in air he postulated that for an idealized point source of energy the spatial distributions of the flow variables would have the same form during a given time interval the variables differing only in scale thus the name of the similarity solution this hypothesis allowed the partial differential equations in terms of r the radius of the blast wave and t time to be transformed into an ordinary differential equation in terms of the similarity variable r 5 ρ o t 2 e displaystyle frac r5rho ot2e where ρ o displaystyle rho o is the density of the air and e displaystyle e is the energy thats released by the explosion this result allowed g i taylor to estimate the yield of the first atomic explosion in new mexico in 1945 using only photographs of the blast which had been published in newspapers and magazines the yield of the explosion was determined by using the equation e ρ o t 2 r c 5 displaystyle eleftfrac rho ot2rightleftfrac rcright5 where c displaystyle c is a dimensionless constant that is a function of the ratio of the specific heat of air at constant pressure to the specific heat of air at constant volume the value of c is also affected by radiative losses but for air values of c of 100110 generally give reasonable results in 1950 g i taylor published two articles in which he revealed the yield e of the first atomic explosion which had previously been classified and whose publication was therefore a source of controversywhile 
nuclear explosions are among the clearest examples of the destructive power of blast waves blast waves generated by exploding conventional bombs and other weapons made from high explosives have been used as weapons of war due to their effectiveness at creating polytraumatic injury during world war ii and the uss involvement in the vietnam war blast lung was a common and often deadly injury improvements in vehicular and personal protective equipment have helped to reduce the incidence of blast lung however as soldiers are better protected from penetrating injury and surviving previously lethal exposures limb injuries eye and ear injuries and traumatic brain injuries have become more prevalent structural behaviour during an explosion depends entirely on the materials used in the construction of the building upon hitting the face of a building the shock front from an explosion is instantly reflected this impact with the structure imparts momentum to exterior components of the building the associated kinetic energy of the moving components must be absorbed or dissipated in order for them to survive generally this is achieved by converting the kinetic energy of the moving component to strain energy in resisting elementstypically'</li><li>'observed to be more elongated than e6 or e7 corresponding to a maximum axis ratio of about 31 the firehose instability is probably responsible for this fact since an elliptical galaxy that formed with an initially more elongated shape would be unstable to bending modes causing it to become rounder simulated dark matter haloes like elliptical galaxies never have elongations greater than about 31 this is probably also a consequence of the firehose instabilitynbody simulations reveal that the bars of barred spiral galaxies often puff up spontaneously converting the initially thin bar into a bulge or thick disk subsystem the bending instability is sometimes violent enough to weaken the bar bulges formed in this way are very boxy in appearance similar to what is often observedthe firehose instability may play a role in the formation of galactic warps stellar dynamics'</li></ul> |
| 37 | <ul><li>'marking go by various names including counterfactuals subjunctives and xmarked conditionals indicative if it is raining in new york then mary is at home counterfactual if it was raining in new york then mary would be at homein older dialects and more formal registers the form were is often used instead of was counterfactuals of this sort are sometimes referred to as wered up conditionals wered up if i were king i could have you thrown in the dungeonthe form were can also be used with an infinitive to form a future less vivid conditional future less vivid if i were to be king i could have you thrown in the dungeoncounterfactuals can also use the pluperfect instead of the past tense conditional perfect if you had called me i would have come in english language teaching conditional sentences are often classified under the headings zero conditional first conditional or conditional i second conditional or conditional ii third conditional or conditional iii and mixed conditional according to the grammatical pattern followed particularly in terms of the verb tenses and auxiliaries used zero conditional refers to conditional sentences that express a factual implication rather than describing a hypothetical situation or potential future circumstance see types of conditional sentence the term is used particularly when both clauses are in the present tense however such sentences can be formulated with a variety of tensesmoods as appropriate to the situation if you dont eat for a long time you become hungry if the alarm goes off theres a fire somewhere in the building if you are going to sit an exam tomorrow go to bed early tonight if aspirins will cure it ill take a couple tonight if you make a mistake someone lets you knowthe first of these sentences is a basic zero conditional with both clauses in the present tense the fourth is an example of the use of will in a condition clause for more such cases see below the use of verb tenses moods and aspects in the parts of such sentences follows general principles as described in uses of english verb forms occasionally mainly in a formal and somewhat archaic style a subjunctive is used in the zeroconditional condition clause as in if the prisoner be held for more than five days for more details see english subjunctive see also § inversion in condition clauses below first conditional or conditional i refers to a pattern used in predictive conditional sentences ie those that concern consequences of a probable future event see types of conditional sentence in the basic first conditional pattern the condition is expressed using the present tense having future meaning in this context in some common fixed expressions or in oldfashioned or'</li><li>'introduction in gary ostertag ed definite descriptions a reader cambridge ma mit press 134 russell bertrand 1905 on denoting mind 14 479493 wettstein howard 1981 demonstrative reference and definite descriptions philosophical studies 40 241257 wilson george m 1991 reference and pronominal descriptions journal of philosophy 88 359387'</li><li>'this means that the source text is composed of logical formulas belonging to one logical system and the goal is to associate them with logical formulas belonging to another logical system for example the formula [UNK] a x displaystyle box ax in modal logic can be translated into firstorder logic using the formula [UNK] y r x y → a y displaystyle forall yrxyto ay natural language formalization starts with a sentence in natural language and translates it into a 
logical formula its goal is to make the logical structure of natural language sentences and arguments explicit it is mainly concerned with their logical form while their specific content is usually ignored logical analysis is a closely related term that refers to the process of uncovering the logical form or structure of a sentence natural language formalization makes it possible to use formal logic to analyze and evaluate natural language arguments this is especially relevant for complex arguments which are often difficult to evaluate without formal tools logic translation can also be used to look for new arguments and thereby guide the reasoning process the reverse process of formalization is sometimes called verbalization it happens when logical formulas are translated back into natural language this process is less nuanced and discussions concerning the relation between natural language and logic usually focus on the problem of formalizationthe success of applications of formal logic to natural language requires that the translation is correct a formalization is correct if its explicit logical features fit the implicit logical features of the original sentence the logical form of ordinary language sentences is often not obvious since there are many differences between natural languages and the formal languages used by logicians this poses various difficulties for formalization for example ordinary expressions frequently include vague and ambiguous expressions for this reason the validity of an argument often depends not just on the expressions themselves but also on how they are interpreted for example the sentence donkeys have ears could mean that all donkeys without exception have ears or that donkeys typically have ears the second translation does not exclude the existence of some donkeys without ears this difference matters for whether a universal quantifier can be used to translate the sentence such ambiguities are not found in the precise formulations of artificial logical languages and have to be solved before translation is possiblethe problem of natural language formalization has various implications for the sciences and humanities especially for the fields of linguistics cognitive science and computer science in the field of formal linguistics for example richard montague provides various suggestions for how to formalize english language expressions in his theory of universal grammar formalization is also discussed in the philosophy of logic in relation to its role in understanding and applying logic if logic is understood as the theory of valid'</li></ul> |
| 10 | <ul><li>'sabiork system for the analysis of biochemical pathways reaction kinetics is a webaccessible database storing information about biochemical reactions and their kinetic properties sabiork comprises a reactionoriented representation of quantitative information on reaction dynamics based on a given selected publication this comprises all available kinetic parameters together with their corresponding rate equations as well as kinetic law and parameter types and experimental and environmental conditions under which the kinetic data were determined additionally sabiork contains information about the underlying biochemical reactions and pathways including their reaction participants cellular location and detailed information about the enzymes catalysing the reactions the data stored in sabiork in a comprehensive manner is mainly extracted manually from literature this includes reactions their participants substrates products modifiers inhibitors activators cofactors catalyst details eg ec enzyme classification protein complex composition wild type mutant information kinetic parameters together with corresponding rate equation biological sources organism tissue cellular location environmental conditions ph temperature buffer and reference details data are adapted normalized and annotated to controlled vocabularies ontologies and external data sources including kegg uniprot chebi pubchem ncbi reactome brenda metacyc biomodels and pubmed as of october 2021 sabiork contains about 71000 curated single entries extracted from more than 7300 publications several tools databases and workflows in systems biology make use of sabiork biochemical reaction data by integration into their framework including sycamore memork celldesigner peroxisomedbtaverna workflows or tools like kineticswizard software for data capture and analysis additionally sabiork is part of miriam registry a set of guidelines for the annotation and curation of computational models the usage of sabiork is free of charge commercial users need a license sabiork offers several ways for data access a browserbased interface restfulbased web services for programmatic accessresult data sets can be exported in different formats including sbml biopaxsbpax and table format sabiork homepage'</li><li>'lipid microdomains are formed when lipids undergo lateral phase separations yielding stable coexisting lamellar domains these phase separations can be induced by changes in temperature pressure ionic strength or by the addition of divalent cations or proteins the question of whether such lipid microdomains observed in model lipid systems also exist in biomembranes had motivated considerable research efforts lipid domains are not readily isolated and examined as unique species in contrast to the examples of lateral heterogeneity one can disrupt the membrane and demonstrate a heterogeneous range of composition in the population of the resulting vesicles or fragments electron microscopy can also be used to demonstrate lateral inhomogeneities in biomembranes often lateral heterogeneity has been inferred from biophysical techniques where the observed signal indicates multiple populations rather than the expected homogeneous population an example of this is the measurement of the diffusion coefficient of a fluorescent lipid analog in soybean protoplasts membrane microheterogeneity is sometimes inferred from the behavior of enzymes where the enzymatic activity does not appear to be correlated with the average lipid physical state exhibited by 
the bulk of the membrane often the methods suggest regions with different lipid fluidity as would be expected of coexisting gel and liquid crystalline phases within the biomembrane this is also the conclusion of a series of studies where differential effects of perturbation caused by cis and trans fatty acids are interpreted in terms of preferential partitioning of the two liquid crystalline and gellike domains biochemistry essential fatty acid lipid raft pip2 domain lipid signaling saturated and unsaturated compounds'</li><li>'ed new york mcgrawhill isbn 9780071624428 whalen k 2014 lippincott illustrated reviews pharmacology'</li></ul> |
| 33 | <ul><li>'belief in psi than healthy adults some scientists have investigated possible neurocognitive processes underlying the formation of paranormal beliefs in a study pizzagalli et al 2000 data demonstrated that subjects differing in their declared belief in and experience with paranormal phenomena as well as in their schizotypal ideation as determined by a standardized instrument displayed differential brain electric activity during resting periods another study schulter and papousek 2008 wrote that paranormal belief can be explained by patterns of functional hemispheric asymmetry that may be related to perturbations during fetal developmentit was also realized that people with higher dopamine levels have the ability to find patterns and meanings where there are not any this is why scientists have connected high dopamine levels with paranormal belief some scientists have criticized the media for promoting paranormal claims in a report by singer and benassi in 1981 they wrote that the media may account for much of the near universality of paranormal belief as the public are constantly exposed to films newspapers documentaries and books endorsing paranormal claims while critical coverage is largely absent according to paul kurtz in regard to the many talk shows that constantly deal with paranormal topics the skeptical viewpoint is rarely heard and when it is permitted to be expressed it is usually sandbagged by the host or other guests kurtz described the popularity of public belief in the paranormal as a quasireligious phenomenon a manifestation of a transcendental temptation a tendency for people to seek a transcendental reality that cannot be known by using the methods of science kurtz compared this to a primitive form of magical thinkingterence hines has written that on a personal level paranormal claims could be considered a form of consumer fraud as people are being induced through false claims to spend their money — often large sums — on paranormal claims that do not deliver what they promise and uncritical acceptance of paranormal belief systems can be damaging to society while the existence of paranormal phenomena is controversial and debated passionately by both proponents of the paranormal and by skeptics surveys are useful in determining the beliefs of people in regards to paranormal phenomena these opinions while not constituting scientific evidence for or against may give an indication of the mindset of a certain portion of the population at least among those who answered the polls the number of people worldwide who believe in parapsychological powers has been estimated to be 3 to 4 billiona survey conducted in 2006 by researchers from australias monash university sought to determine the types of phenomena that people claim to have experienced and the effects these experiences have had on their lives the study was conducted as an'</li><li>'readily tested at random in 1969 helmut schmidt introduced the use of highspeed random event generators reg for precognition testing and experiments were also conducted at the princeton engineering anomalies research lab once again flaws were found in all of schmidts experiments when the psychologist c e m hansel found that several necessary precautions were not takensf writer philip k dick believed that he had precognitive experiences and used the idea in some of his novels especially as a central plot element in his 1956 science fiction short story the minority report and in his 1956 novel the world jones madein 1963 the bbc 
television programme monitor broadcast an appeal by the writer jb priestley for experiences which challenged our understanding of time he received hundreds of letters in reply and believed that many of them described genuine precognitive dreams in 2014 the bbc radio 4 broadcaster francis spufford revisited priestleys work and its relation to the ideas of jw dunnein 1965 g w lambert a former council member of the spr proposed five criteria that needed to be met before an account of a precognitive dream could be regarded as credible the dream should be reported to a credible witness before the event the time interval between the dream and the event should be short the event should be unexpected at the time of the dream the description should be of an event destined literally and not symbolically to happen the details of dream and event should tallydavid ryback a psychologist in atlanta used a questionnaire survey approach to investigate precognitive dreaming in college students during the 1980s his survey of over 433 participants showed that 290 or 669 per cent reported some form of paranormal dream he rejected many of these reports but claimed that 88 per cent of the population was having actual precognitive dreams in 2011 the psychologist daryl bem a professor emeritus at cornell university published findings showing statistical evidence for precognition in the journal of personality and social psychology the paper was heavily criticised and the criticism widened to include the journal itself and the validity of the peerreview process in 2012 an independent attempt to reproduce bems results was published but it failed to do so the widespread controversy led to calls for improvements in practice and for more research claims of precognition are like any other claims open to scientific criticism however the nature of the criticism must adapt to the nature of the claim claims of precognition are criticised on three main grounds there is no known scientific mechanism which would allow precognition it breaks temporal causality in that the precognised event causes an effect in the subject prior to the event'</li><li>'mental radio does it work and how 1930 was written by the american author upton sinclair and initially selfpublished this book documents sinclairs test of psychic abilities of mary craig sinclair his second wife while she was in a state of profound depression with a heightened interest in the occult she attempted to duplicate 290 pictures which were drawn by her brother sinclair claimed mary successfully duplicated 65 of them with 155 partial successes and 70 failures in spite of the authors best efforts the experiments were not conducted in a controlled scientific environmentthe german edition included a preface written by albert einstein who admired the book and praised sinclairs writing abilities the psychical researcher walter franklin prince conducted an independent analysis of the results in 1932 he believed that telepathy had been demonstrated in sinclairs data princes analysis was published as the sinclair experiments for telepathy in part i of bulletin xvi of the boston society for psychical research in april 1932 and was included in the addendum for the book on the subject of occult and pseudoscience topics sinclair has been described as credulous martin gardner wrote as mental radio stands it is a highly unsatisfactory account of conditions surrounding the clairvoyancy tests throughout his entire life sinclair has been a gullible victim of mediums and psychics gardner also 
wrote the possibility of sensory leakage during the experiment had not been ruled out in the first place an intuitive wife who knows her husband intimately may be able to guess with a fair degree of accuracy what he is likely to draw — particularly if the picture is related to some freshly recalled event the two experienced in common at first simple pictures like chairs and tables would likely predominate but as these are exhausted the field of choice narrows and pictures are more likely to be suggested by recent experiences it is also possible that sinclair may have given conversational hints during some of the tests — hints which in his strong will to believe he would promptly forget about also one must not rule out the possibility that in many tests made across the width of a room mrs sinclair may have seen the wiggling of the top of a pencil or arm movements which would convey to her unconscious a rough notion of the drawing when mrs sinclair was tested by william mcdougall under better precautions the results were less than satisfactory leon harris 1975 upton sinclair american rebel crowell'</li></ul> |
| 23 | <ul><li>'the infant is considered safe high caffeine intake by breastfeeding mothers may cause their infants to become irritable or have trouble sleeping a metaanalysis has shown that breastfeeding mothers who smoke expose their infants to nicotine which may cause respiratory illnesses including otitis media in the nursing infant there is a commercial market for human breast milk both in the form of a wet nurse service and as a milk product as a product breast milk is exchanged by human milk banks as well as directly between milk donors and customers as mediated by websites on the internet human milk banks generally have standardized measures for screening donors and storing the milk sometimes even offering pasteurization while milk donors on websites vary in regard to these measures a study in 2013 came to the conclusion that 74 of breast milk samples from providers found from websites were colonized with gramnegative bacteria or had more than 10000 colonyforming unitsml of aerobic bacteria bacterial growth happens during transit according to the fda bad bacteria in food at room temperature can double every 20 minuteshuman milk is considered to be healthier than cows milk and infant formula when it comes to feeding an infant in the first six months of life but only under extreme situations do international health organizations support feeding an infant breast milk from a healthy wet nurse rather than that of its biological mother one reason is that the unregulated breast milk market is fraught with risks such as drugs of abuse and prescription medications being present in donated breast milk the transmission of these substances through breast milk can do more harm than good when it comes to the health outcomes of the infant recipient a 2015 cbs article cites an editorial led by dr sarah steele in the journal of the royal society of medicine in which they say that health claims do not stand up clinically and that raw human milk purchased online poses many health risks cbs found a study from the center for biobehavioral health at nationwide childrens hospital in columbus that found that 11 out of 102 breast milk samples purchased online were actually blended with cows milk the article also explains that milk purchased online may be improperly sanitized or stored so it may contain foodborne illness and infectious diseases such as hepatitis and hiv a minority of people including restaurateurs hans lochen of switzerland and daniel angerer of austria who operates a restaurant in new york city have used human breast milk or at least advocated its use as a substitute for cows milk in dairy products and food recipes an icecreamist in londons covent garden started selling an ice cream named baby gaga in february 2011 each serving cost £14 all the milk was'</li><li>'has been estimated that humans generate about 10 billion different antibodies each capable of binding a distinct epitope of an antigen although a huge repertoire of different antibodies is generated in a single individual the number of genes available to make these proteins is limited by the size of the human genome several complex genetic mechanisms have evolved that allow vertebrate b cells to generate a diverse pool of antibodies from a relatively small number of antibody genes the chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody — the chromosome region containing heavy chain genes igh is found on chromosome 14 and the loci containing lambda and kappa light chain genes igl and igk are found on chromosomes 22 and 2 in humans one of these domains is called the variable domain which is present in each heavy and light chain of every antibody but can differ in different antibodies generated from distinct b cells differences between the variable domains are located on three loops known as hypervariable regions hv1 hv2 and hv3 or complementaritydetermining regions cdr1 cdr2 and cdr3 cdrs are supported within the variable domains by conserved framework regions the heavy chain locus contains about 65 different variable domain genes that all differ in their cdrs combining these genes with an array of genes for other domains of the antibody generates a large cavalry of antibodies with a high degree of variability this combination is called vdj recombination discussed below somatic recombination of immunoglobulins also known as vdj recombination involves the generation of a unique immunoglobulin variable region the variable region of each immunoglobulin heavy or light chain is encoded in several pieces — known as gene segments subgenes these segments are called variable v diversity d and joining j segments v d and j segments are found in ig heavy chains but only v and j segments are found in ig light chains multiple copies of the v d and j gene segments exist and are tandemly arranged in the genomes of mammals in the bone marrow each developing b cell will assemble an immunoglobulin variable region by randomly selecting and combining one v one d and one j gene segment or one v and one j segment in the light chain as there are multiple copies of each type of gene segment and different combinations of gene segments can be used to generate each immunoglobulin variable region this process generates a huge number of antibodies each with different paratopes and thus different antigen specific'</li><li>'##lin a3 is further metabolized by soluble epoxide hydrolase 2 seh to 8r11r12rtrihydroxy5z9e14zeicosatetraenoic acid 12rhpete also spontaneously decomposes to a mixture of hepoxilins and trihydroxyeicosatetraenoic acids that possess r or s hydroxy and epoxy residues at various sites while 8rhydroxy11r12repoxyhepoxilin a3 spontaneously decomposes to 8r11r12rtrihydroxy5z9e14zeicosatetraenoic acid these decompositions may occur during tissue isolation procedures recent studies indicate that the metabolism by aloxe3 of the r stereoisomer of 12hpete made by alox12b and therefore possibly the s stereoisomer of 12hpete made by alox12 or alox15 is responsible for forming various hepoxilins in the epidermis of human and mouse skin and tongue and possibly other tissueshuman skin metabolizes 12shpete in reactions strictly analogous to those of 12rhpete it metabolized 12shpete by elox3 to 8rhydroxy11s12sepoxy5z9e14zeicosatetraenoic acid and 12oxoete with the former product then being metabolized by seh to 8r11s12strihydroxy5z9e14zeicosatetraenoic acid 12shpete also spontaneously decomposes to a mixture of hepoxilins and trihydroxyeicosatetraenoic acids trioxilins that possess r or s hydroxy and rs or sr epoxide residues at various sites while 8rhydroxy11s12sepoxyhepoxilin a3 spontaneously decomposes to 8r11s12strihydroxy5z9e14zeicosatetraenoic acidin other tissues and animal species numerous hepoxilins form but the hepoxilin synthase activity responsible for their formation is variable hepoxilin a3 8rshydroxy1112epoxy5z9e14zeicosatrienoic acid and hepoxilin b3 10rshydroxy1112epxoy5z8z14zeicosatrienoic acid refer to a mixture of diastereomers and⁄or enantiomers derived from arachidonic acid'</li></ul> |
| 39 | <ul><li>'joule heating also known as resistive resistance or ohmic heating is the process by which the passage of an electric current through a conductor produces heat joules first law also just joules law also known in countries of the former ussr as the joule – lenz law states that the power of heating generated by an electrical conductor equals the product of its resistance and the square of the current joule heating affects the whole electric conductor unlike the peltier effect which transfers heat from one electrical junction to another jouleheating or resistiveheating is used in multiple devices and industrial process the part that converts electricity into heat is called a heating element among the many practical uses are an incandescent light bulb glows when the filament is heated by joule heating due to thermal radiation also called blackbody radiation electric fuses are used as a safety breaking the circuit by melting if enough current flows to melt them electronic cigarettes vaporize propylene glycol and vegetable glycerine by joule heating multiple heating devices use joule heating such as electric stoves electric heaters soldering irons cartridge heaters some food processing equipment may make use of joule heating running current through food material which behave as an electrical resistor causes heat release inside the food the alternating electrical current coupled with the resistance of the food causes the generation of heat a higher resistance increases the heat generated ohmic heating allows for fast and uniform heating of food products which maintains quality products with particulates heat up faster compared to conventional heat processing due to higher resistance james prescott joule first published in december 1840 an abstract in the proceedings of the royal society suggesting that heat could be generated by an electrical current joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current flowing through the wire for a 30 minute period by varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the immersed wirein 1841 and 1842 subsequent experiments showed that the amount of heat generated was proportional to the chemical energy used in the voltaic pile that generated the template this led joule to reject the caloric theory at that time the dominant theory in favor of the mechanical theory of heat according to which heat is another form of energyresistive heating was independently studied by heinrich lenz in 1842the si unit of energy was subsequently named the joule and given the symbol j the commonly known unit of power the watt is equivalent to one joule per second joule'</li><li>'timetranslation symmetry or temporal translation symmetry tts is a mathematical transformation in physics that moves the times of events through a common interval timetranslation symmetry is the law that the laws of physics are unchanged ie invariant under such a transformation timetranslation symmetry is a rigorous way to formulate the idea that the laws of physics are the same throughout history timetranslation symmetry is closely connected via noethers theorem to conservation of energy in mathematics the set of all time translations on a given system form a lie group there are many symmetries in nature besides time translation such as spatial translation or rotational symmetries these symmetries can be broken and explain diverse phenomena such as crystals superconductivity and the higgs mechanism however it was thought until very recently that timetranslation symmetry could not be broken time crystals a state of matter first observed in 2017 break timetranslation symmetry symmetries are of prime importance in physics and are closely related to the hypothesis that certain physical quantities are only relative and unobservable symmetries apply to the equations that govern the physical laws eg to a hamiltonian or lagrangian rather than the initial conditions values or magnitudes of the equations themselves and state that the laws remain unchanged under a transformation if a symmetry is preserved under a transformation it is said to be invariant symmetries in nature lead directly to conservation laws something which is precisely formulated by noethers theorem to formally describe timetranslation symmetry we say the equations or laws that describe a system at times t displaystyle t and t τ displaystyle ttau are the same for any value of t displaystyle t and τ displaystyle tau for example considering newtons equation m x ¨ − d v d x x displaystyle mddot xfrac dvdxx one finds for its solutions x x t displaystyle xxt the combination 1 2 m x [UNK] t 2 v x t displaystyle frac 12mdot xt2vxt does not depend on the variable t displaystyle t of course this quantity describes the total energy whose conservation is due to the timetranslation invariance of the equation of motion by studying the composition of symmetry transformations eg of geometric objects one reaches the conclusion that they form a group and more specifically a lie transformation group if one considers continuous finite symmetry transformations different symmetries form different groups with different geometries time independent hamiltonian systems form a group of time translations that is described by the noncompact abelian lie group r displaystyle mathbb r tts'</li><li>'mass does not depend on δ e displaystyle delta e the entropy is thus a measure of the uncertainty about exactly which quantum state the system is in given that we know its energy to be in some interval of size δ e displaystyle delta e deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have d s δ q t displaystyle dsfrac delta qt the fundamental assumption of statistical mechanics is that all the ω e displaystyle omega lefteright states at a particular energy are equally likely this allows us to extract all the thermodynamical quantities of interest the temperature is defined as 1 k t ≡ β ≡ d log ω e d e displaystyle frac 1ktequiv beta equiv frac dlog leftomega lefterightrightde this definition can be derived from the microcanonical ensemble which is a system of a constant number of particles a constant volume and that does not exchange energy with its environment suppose that the system has some external parameter x that can be changed in general the energy eigenstates of the system will depend on x according to the adiabatic theorem of quantum mechanics in the limit of an infinitely slow change of the systems hamiltonian the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in the generalized force x corresponding to the external parameter x is defined such that x d x displaystyle xdx is the work performed by the system if x is increased by an amount dx eg if x is the volume then x is the pressure the generalized force for a system known to be in energy eigenstate e r displaystyle er is given by x − d e r d x displaystyle xfrac derdx since the system can be in any energy eigenstate within an interval of δ e displaystyle delta e we define the generalized force for the system as the expectation value of the above expression x − ⟨ d e r d x ⟩ displaystyle xleftlangle frac derdxrightrangle to evaluate the average we partition the ω e displaystyle omega e energy eigenstates by counting how many of them have a value for d e r d x displaystyle frac derdx within a range between y displaystyle y and y δ y displaystyle ydelta y calling this number ω y e displaystyle omega yleft'</li></ul> |
| 9 | <ul><li>'in microbiology the multiplicity of infection or moi is the ratio of agents eg phage or more generally virus bacteria to infection targets eg cell for example when referring to a group of cells inoculated with virus particles the moi is the ratio of the number of virus particles to the number of target cells present in a defined space the actual number of viruses or bacteria that will enter any given cell is a stochastic process some cells may absorb more than one infectious agent while others may not absorb any before determining the multiplicity of infection its absolutely necessary to have a wellisolated agent as crude agents may not produce reliable and reproducible results the probability that a cell will absorb n displaystyle n virus particles or bacteria when inoculated with an moi of m displaystyle m can be calculated for a given population using a poisson distribution this application of poissons distribution was applied and described by ellis and delbruck p n m n ⋅ e − m n displaystyle pnfrac mncdot emn where m displaystyle m is the multiplicity of infection or moi n displaystyle n is the number of infectious agents that enter the infection target and p n displaystyle pn is the probability that an infection target a cell will get infected by n displaystyle n infectious agents in fact the infectivity of the virus or bacteria in question will alter this relationship one way around this is to use a functional definition of infectious particles rather than a strict count such as a plaque forming unit for virusesfor example when an moi of 1 1 infectious viral particle per cell is used to infect a population of cells the probability that a cell will not get infected is p 0 3679 displaystyle p03679 and the probability that it be infected by a single particle is p 1 3679 displaystyle p13679 by two particles is p 2 1839 displaystyle p21839 by three particles is p 3 613 displaystyle p3613 and so on the average percentage of cells that will become infected as a result of inoculation with a given moi can be obtained by realizing that it is simply p n 0 1 − p 0 displaystyle pn01p0 hence the average fraction of cells that will become infected following an inoculation with an moi of m displaystyle m is given by p n 0 1 − p n 0 1 − m 0 ⋅ e − m 0 1 − e − m displaystyle pn01pn01frac m0cdot em01em which is approximately equal to'</li><li>'use of a mam targeting adhesion inhibitor was shown to significantly decrease the colonization of burn wounds by multidrug resistant pseudomonas aeruginosa in rats n gonorrhoeae is host restricted almost entirely to humans extensive studies have established type 4 fimbrial adhesins of n gonorrhoeae virulence factors these studies have shown that only strains capable of expressing fimbriae are pathogenic high survival of polymorphonuclear neutrophils pmns characterizes neisseria gonorrhoeae infections additionally recent studies out of stockholm have shown that neisseria can hitchhike on pmns using their adhesin pili thus hiding them from neutrophil phagocytic activity this action facilitates the spread of the pathogen throughout the epithelial cell layer escherichia coli strains most known for causing diarrhea can be found in the intestinal tissue of pigs and humans where they express the k88 and cfa1 to attach to the intestinal lining additionally upec causes about 90 of urinary tract infections of those e coli which cause utis 95 express type 1 fimbriae fimh in e coli overcomes the antibody based immune response by natural conversion from the high to the low affinity state through this conversion fimh adhesion may shed the antibodies bound to it escherichia coli fimh provides an example of conformation specific immune response which enhances impact on the protein by studying this particular adhesion researchers hope to develop adhesionspecific vaccines which may serve as a model for antibodymediation of pathogen adhesion fungal adhesin trimeric autotransporter adhesins taa'</li><li>'the ziehlneelsen stain also known as the acidfast stain is a bacteriological staining technique used in cytopathology and microbiology to identify acidfast bacteria under microscopy particularly members of the mycobacterium genus this staining method was initially introduced by paul ehrlich 1854 – 1915 and subsequently modified by the german bacteriologists franz ziehl 1859 – 1926 and friedrich neelsen 1854 – 1898 during the late 19th century the acidfast staining method in conjunction with auramine phenol staining serves as the standard diagnostic tool and is widely accessible for rapidly diagnosing tuberculosis caused by mycobacterium tuberculosis and other diseases caused by atypical mycobacteria such as leprosy caused by mycobacterium leprae and mycobacterium aviumintracellulare infection caused by mycobacterium avium complex in samples like sputum gastric washing fluid and bronchoalveolar lavage fluid these acidfast bacteria possess a waxy lipidrich outer layer that contains high concentrations of mycolic acid rendering them resistant to conventional staining techniques like the gram stainafter the ziehlneelsen staining procedure using carbol fuchsin acidfast bacteria are observable as vivid red or pink rods set against a blue or green background depending on the specific counterstain used such as methylene blue or malachite green respectively nonacidfast bacteria and other cellular structures will be colored by the counterstain allowing for clear differentiation in anatomic pathology specimens immunohistochemistry and modifications of ziehl – neelsen staining such as fitefaraco staining have comparable diagnostic utility in identifying mycobacterium both of them are superior to traditional ziehl – neelsen stainmycobacterium are slowgrowing rodshaped bacilli that are slightly curved or straight and are considered to be gram positive some mycobacteria are freeliving saprophytes but many are pathogens that cause disease in animals and humans mycobacterium bovis causes tuberculosis in cattle since tuberculosis can be spread to humans milk is pasteurized to kill any of the bacteria mycobacterium tuberculosis that causes tuberculosis tb in humans is an airborne bacterium that typically infects the human lungs testing for tb includes blood testing skin tests and chest xrays when looking at the smears for tb it is stained using an acidfast stain these'</li></ul> |
| 35 | <ul><li>'aeolian origin of the loesses was recognized later virlet daoust 1857 particularly due to the convincing observations of loesses in china by ferdinand von richthofen 1878 a tremendous number of papers have been published since then focusing on the formation of loesses and on loesspaleosol older soil buried under deposits sequences as the archives of climate and environment change these water conservation works have been carried out extensively in china and the research of loesses in china has been ongoing since 1954 33 much effort was put into setting up regional and local loess stratigraphies and their correlations kukla 1970 1975 1977 however even the chronostratigraphical position of the last interglacial soil correlating with marine isotope substage 5e was a matter of debate due to the lack of robust and reliable numerical dating as summarized for example by zoller et al 1994 and frechen et al 1997 for the austrian and hungarian loess stratigraphy respectivelysince the 1980s thermoluminescence tl optically stimulated luminescence osl and infrared stimulated luminescence irsl dating have been available providing the possibility for dating the time of loess dust depositions ie the time elapsed since the last exposure of the mineral grains to daylight during the past decade luminescence dating has significantly improved by new methodological improvements especially the development of single aliquot regenerative sar protocols murray wintle 2000 resulting in reliable ages or age estimates with an accuracy of up to 5 and 10 for the last glacial record more recently luminescence dating has also become a robust dating technique for penultimate and antepenultimate glacial loess eg thiel et al 2011 schmidt et al 2011 allowing for a reliable correlation of loesspalaeosol sequences for at least the last two interglacialglacial cycles throughout europe and the northern hemisphere frechen 2011 furthermore the numerical dating provides the basis for quantitative loess research applying more sophisticated methods to determine and understand highresolution proxy data including the palaeodust content of the atmosphere variations of the atmospheric circulation patterns and wind systems palaeoprecipitation and palaeotemperaturebesides luminescence dating methods the use of radiocarbon dating in loess has increased during the past decades advances in methods of analyses instrumentation and refinements to the radiocarbon calibration curve have made it possible to obtain reliable ages from loess deposits for the last 4045 ka however the use of'</li><li>'##capes structure robin thwaites brian slater 2004 the concept of pedodiversity and its application in diverse geoecological systems 1 zinck j a 1988 physiography and soils lecturenotes for soil students soil science division soil survey courses subject matter k6 itc enschede the netherlands'</li><li>'have a rich fossil record from the paleoproterozoic onwards outside of ice ages oxisols have generally been the dominant soil order in the paleopedological record this is because soil formation after which oxisols take more weathering to form than any other soil order has been almost nonexistent outside eras of extensive continental glaciation this is not only because of the soils formed by glaciation itself but also because mountain building which is the other critical factor in producing new soil has always coincided with a reduction in global temperatures and sea levels this is because the sediment formed from the eroding mountains reduces the atmospheric co2 content and also causes changes in circulation linked closely by climatologists to the development of continental ice sheets oxisols were not vegetated until the late carboniferous probably because microbial evolution was not before that point advanced enough to permit plants to obtain sufficient nutrients from soils with very low concentrations of nitrogen phosphorus calcium and potassium owing to their extreme climatic requirements gelisol fossils are confined to the few periods of extensive continental glaciation the earliest being 900 million years ago in the neoproterozoic however in these periods fossil gelisols are generally abundant notable finds coming from the carboniferous in new south wales the earliest land vegetation is found in early silurian entisols and inceptisols and with the growth of land vegetation under a protective ozone layer several new soil orders emerged the first histosols emerged in the devonian but are rare as fossils because most of their mass consists of organic materials that tend to decay quickly alfisols and ultisols emerged in the late devonian and early carboniferous and have a continuous though not rich fossil record in eras since then spodosols are known only from the carboniferous and from a few periods since that time though less acidic soils otherwise similar to spodosols are known from the mesozoic and tertiary and may constitute an extinct suborder during the mesozoic the paleopedological record tends to be poor probably because the absence of mountainbuilding and glaciation meant that most surface soils were very old and were constantly being weathered of what weatherable materials remained oxisols and orthents are the dominant groups though a few more fertile soils have been found such as the extensive andisols mentioned earlier from jurassic siberia evidence for widespread deeply weathered soils in the paleocene can be seen in abundant oxisols and ultisols in nowheavily glaciated scotland and antarctica mollisols the major agricultural soils'</li></ul> |
| 11 | <ul><li>'pumps used in vads can be divided into two main categories – pulsatile pumps which mimic the natural pulsing action of the heart and continuousflow pumps pulsatile vads use positive displacement pumps in some pulsatile pumps that use compressed air as an energy source the volume occupied by blood varies during the pumping cycle if the pump is contained inside the body then a vent tube to the outside air is required continuousflow vads are smaller and have proven to be more durable than pulsatile vads they normally use either a centrifugal pump or an axial flow pump both types have a central rotor containing permanent magnets controlled electric currents running through coils contained in the pump housing apply forces to the magnets which in turn cause the rotors to spin in the centrifugal pumps the rotors are shaped to accelerate the blood circumferentially and thereby cause it to move toward the outer rim of the pump whereas in the axial flow pumps the rotors are more or less cylindrical with blades that are helical causing the blood to be accelerated in the direction of the rotors axisan important issue with continuous flow pumps is the method used to suspend the rotor early versions used solid bearings however newer pumps some of which are approved for use in the eu use either magnetic levitation maglev or hydrodynamic suspension the first left ventricular assist device lvad system was created by domingo liotta at baylor college of medicine in houston in 1962 the first lvad was implanted in 1963 by liotta and e stanley crawford the first successful implantation of an lvad was completed in 1966 by liotta along with dr michael e debakey the patient was a 37yearold woman and a paracorporeal external circuit was able to provide mechanical support for 10 days after the surgery the first successful longterm implantation of an lvad was conducted in 1988 by dr william f bernhard of boston childrens hospital medical center and thermedics inc of woburn ma under a national institutes of health nih research contract which developed heartmate an electronically controlled assist device this was funded by a threeyear 62 million contract to thermedics and childrens hospital boston ma from the national heart lung and blood institute a program of the nih the early vads emulated the heart by using a pulsatile action where blood is alternately sucked into the pump from the left ventricle then forced out into the aorta devices of this kind include the heartmate ip lvas which'</li><li>'10 ml per 100 g per minute in brain tissue a biochemical cascade known as the ischemic cascade is triggered when the tissue becomes ischemic potentially resulting in damage to and the death of brain cells medical professionals must take steps to maintain proper cbf in patients who have conditions like shock stroke cerebral edema and traumatic brain injury cerebral blood flow is determined by a number of factors such as viscosity of blood how dilated blood vessels are and the net pressure of the flow of blood into the brain known as cerebral perfusion pressure which is determined by the bodys blood pressure cerebral perfusion pressure cpp is defined as the mean arterial pressure map minus the intracranial pressure icp in normal individuals it should be above 50 mm hg intracranial pressure should not be above 15 mm hg icp of 20 mm hg is considered as intracranial hypertension cerebral blood vessels are able to change the flow of blood through them by altering their diameters in a process called cerebral autoregulation they constrict when systemic blood pressure is raised and dilate when it is lowered arterioles also constrict and dilate in response to different chemical concentrations for example they dilate in response to higher levels of carbon dioxide in the blood and constrict in response to lower levels of carbon dioxidefor example assuming a person with an arterial partial pressure of carbon dioxide paco2 of 40 mmhg normal range of 38 – 42 mmhg and a cbf of 50 ml per 100g per min if the paco2 dips to 30 mmhg this represents a 10 mmhg decrease from the initial value of paco2 consequently the cbf decreases by 1ml per 100g per min for each 1mmhg decrease in paco2 resulting in a new cbf of 40ml per 100g of brain tissue per minute in fact for each 1 mmhg increase or decrease in paco2 between the range of 20 – 60 mmhg there is a corresponding cbf change in the same direction of approximately 1 – 2 ml100gmin or 2 – 5 of the cbf value this is why small alterations in respiration pattern can cause significant changes in global cbf specially through paco2 variationscbf is equal to the cerebral perfusion pressure cpp divided by the cerebrovascular resistance cvr cbf cpp cvrcontrol of cbf is considered in terms of the factors affecting cpp and the factors affecting cvr cvr is controlled by four major mechanisms metabolic control or metabolic autore'</li><li>'signals from in further detail the heart receives its neural input through parasympathetic and sympathetic ganglia and lateral grey column of the spinal cord the neurocardiac axis is the link to many problems regarding the physiological functions of the body this includes cardiac ischemia stroke epilepsy and most importantly heart arrhythmias and cardiac myopathies many of these problems are due to the imbalance of the nervous system resulting in symptoms that affect both the heart and the brainthe connection between the cardiovascular and nervous system has brought up a concern in the training processes for medical students neurocardiology is the understanding that the body is interconnected and weave in and out of other systems when training within one specialty the doctors are more likely to associate patients symptoms to their field without taking the integration into account the doctor can consequently delay a correct diagnosis and treatment for the patient however by specializing in a field advancement in medicine continues as new findings come into perspective cardiovascular systems are regulated by the autonomic nervous systems which includes the sympathetic and parasympathetic nervous systems a distinct balance between these two systems is crucial for the pathophysiology of cardiovascular disease chronic stress has been widely studied on its effects of the body resulting in an elevated heart rate hr reduced hr variability elevated sympathetic tone and intensified cardiovascular activity consequently stress promotes an autonomic imbalance in favor of the sympathetic nervous system the activation of the sympathetic nervous system contributes to endothelial dysfunction hypertension atherosclerosis insulin resistance and increased incidence of arrhythmias an imbalance in the autonomic nervous system has been documented in mood disorders it is commonly regarded as a mediator between mood disorders and cardiovascular disordersthe hypothalamus is the part of the brain that regulates function and responds to stress when the brain perceives environmental danger the amygdala fires a nerve impulse to the hypothalamus to initiate the bodys fightorflight mode through the sympathetic nervous system the stress response starts with the hypothalamus stimulating the pituitary gland which releases the adrenocorticotropic hormone this signals the release of cortisol the stress hormone initiating a multitude of physical effects on the body to aid in survival the negative feedback loop is then needed to return the body to its resting state by signaling the parasympathetic nervous systemprolonged stress leads to many hazards within the nervous system various hormones and glands become overworked chemical waste is produced resulting in degeneration of nerve cells the result of prolonged stress is the breakdown'</li></ul> |
| 40 | <ul><li>'space and comes with a natural topology for a topological space x displaystyle x and a finite set s displaystyle s the configuration space of x with particles labeled by s is conf s x f [UNK] f s [UNK] x is injective displaystyle operatorname conf sxfmid fcolon shookrightarrow xtext is injective for n ∈ n displaystyle nin mathbb n define n 1 2 … n displaystyle mathbf n 12ldots n then the nth configuration space of x is conf n x displaystyle operatorname conf mathbf n x and is denoted simply conf n x displaystyle operatorname conf nx the space of ordered configuration of two points in r 2 displaystyle mathbf r 2 is homeomorphic to the product of the euclidean 3space with a circle ie conf 2 r 2 [UNK] r 3 × s 1 displaystyle operatorname conf 2mathbf r 2cong mathbf r 3times s1 more generally the configuration space of two points in r n displaystyle mathbf r n is homotopy equivalent to the sphere s n − 1 displaystyle sn1 the configuration space of n displaystyle n points in r 2 displaystyle mathbf r 2 is the classifying space of the n displaystyle n th braid group see below the nstrand braid group on a connected topological space x is b n x π 1 uconf n x displaystyle bnxpi 1operatorname uconf nx the fundamental group of the nth unordered configuration space of x the nstrand pure braid group on x is p n x π 1 conf n x displaystyle pnxpi 1operatorname conf nx the first studied braid groups were the artin braid groups b n [UNK] π 1 uconf n r 2 displaystyle bncong pi 1operatorname uconf nmathbf r 2 while the above definition is not the one that emil artin gave adolf hurwitz implicitly defined the artin braid groups as fundamental groups of configuration spaces of the complex plane considerably before artins definition in 1891it follows from this definition and the fact that conf n r 2 displaystyle operatorname conf nmathbf r 2 and uconf n r 2 displaystyle operatorname uconf nmathbf r 2 are eilenberg – maclane spaces of type k π 1 displaystyle kpi 1 that the unordered configuration space of the plane uconf n r 2'</li><li>'##s to denote the set of limit points of s displaystyle s then we have the following characterization of the closure of s displaystyle s the closure of s displaystyle s is equal to the union of s displaystyle s and l s displaystyle ls this fact is sometimes taken as the definition of closure a corollary of this result gives us a characterisation of closed sets a set s displaystyle s is closed if and only if it contains all of its limit points no isolated point is a limit point of any set a space x displaystyle x is discrete if and only if no subset of x displaystyle x has a limit point if a space x displaystyle x has the trivial topology and s displaystyle s is a subset of x displaystyle x with more than one element then all elements of x displaystyle x are limit points of s displaystyle s if s displaystyle s is a singleton then every point of x [UNK] s displaystyle xsetminus s is a limit point of s displaystyle s adherent point – point that belongs to the closure of some given subset of a topological space condensation point – a stronger analog of limit pointpages displaying wikidata descriptions as a fallback convergent filter – use of filters to describe and characterize all basic topological notions and resultspages displaying short descriptions of redirect targets derived set mathematics – set of all limit points of a setpages displaying wikidata descriptions as a fallback filters in topology – use of filters to describe and characterize all basic topological notions and results isolated point – point of a subset s around which there are no other points of s limit of a function – point to which functions converge in analysis limit of a sequence – value to which tends an infinite sequence subsequential limit – the limit of some subsequence'</li><li>'topology optimization to is a mathematical method that optimizes material layout within a given design space for a given set of loads boundary conditions and constraints with the goal of maximizing the performance of the system topology optimization is different from shape optimization and sizing optimization in the sense that the design can attain any shape within the design space instead of dealing with predefined configurations the conventional topology optimization formulation uses a finite element method fem to evaluate the design performance the design is optimized using either gradientbased mathematical programming techniques such as the optimality criteria algorithm and the method of moving asymptotes or non gradientbased algorithms such as genetic algorithms topology optimization has a wide range of applications in aerospace mechanical biochemical and civil engineering currently engineers mostly use topology optimization at the concept level of a design process due to the free forms that naturally occur the result is often difficult to manufacture for that reason the result emerging from topology optimization is often finetuned for manufacturability adding constraints to the formulation in order to increase the manufacturability is an active field of research in some cases results from topology optimization can be directly manufactured using additive manufacturing topology optimization is thus a key part of design for additive manufacturing a topology optimization problem can be written in the general form of an optimization problem as minimize ρ f f u ρ ρ [UNK] ω f u ρ ρ d v s u b j e c t t o g 0 ρ [UNK] ω ρ d v − v 0 ≤ 0 g j u ρ ρ ≤ 0 with j 1 m displaystyle beginalignedunderset rho operatorname minimize ffmathbf urho rho int omega fmathbf urho rho mathrm d voperatorname subjectto g0rho int omega rho mathrm d vv0leq 0gjmathbf u rho rho leq 0text with j1mendaligned the problem statement includes the following an objective function f u ρ ρ displaystyle fmathbf urho rho this function represents the quantity that is being minimized for best performance the most common objective function is compliance where minimizing compliance leads to maximizing the stiffness of a structure the material distribution as a problem variable this is described by the density of the material at each location ρ x displaystyle rho mathbf x material is either present indicated by a 1 or absent indicated by a 0 u u ρ displaystyle mathbf u mathbf u mathbf rho is a state field that satisfies a linear or nonlinear state equation depending on'</li></ul> |
| 13 | <ul><li>'artrage is a bitmap graphics editor for digital painting created by ambient design ltd it is currently in version 6 and supports windows macos and mobile apple and android devices and is available in multiple languages it caters to all ages and skill levels from children to professional artists artrage 5 was announced for january 2017 and finally released in february 2017it is designed to be used with a tablet pc or graphics tablet but it can be used with a regular mouse as well its mediums include tools such as oil paint spray paint pencil acrylic and others using relatively realistic physics to simulate actual painting other tools include tracing smearing blurring mixing symmetry different types of paper for the canvas ie crumpled paper smooth paper wrinkled tin foil etc as well as special effects custom brushes and basic digital editing tools artrage is designed to be as realistic as possible this includes varying thickness and textures of media and canvas the ability to mix media and a realistic colour blending option as well as the standard digital rgb blending it includes a wide array of real life tools as well as stencils scrap layers to use as scrap paper or mixing palettes and the option to integrate reference or tracing images the later versions studio studio pro and artrage 4 include more standard digital tools such as select transform cloner symmetry fill and custom brushes sticker each tool is highly customisable and comes with several presets it is possible to share custom resources between users and there is a reasonably active artrage community that creates and shares presets canvases custom brushes stencils colour palettes and other resources real colour blending artrage offers a realistic colour blending option as well as standard digital rgb based blending it is turned off by default as it is memory intensive but can be turned on from the tools menu the most noticeable effect is that green is produced when yellow and blue are mixedthe color picker supports hsl and rgb colors one of the less well known features of artrage is the custom resource options users can create their own versions of various resources and tools or record scripts and share them with other users users can save their resource collections as a package file arpack which acts similar to a zip file it allows folders of resources to be shared and automatically installed artrage can import some photoshop filters but not all it only supports ttf truetype fonts which it reads from the computers fonts folder package files do not work with versions earlier than 35 artrage studio does not support photoshop filters or allow sticker creation and has fewer options overall alternatively individual resources can be shared directly most of the resources have'</li><li>'##im ecole du louvre paris 2003 proceedings pp 2 – 15 expanded concept of documentation jones caitlin does hardware dictate meaning three variable media conservation case studies horizon article jones caitlin seeing double emulation in theory and practice the erl king case study case study jones caitlin understanding medium preserving content and context in variable media art article from keep moving images christiane paul challenges for a ubiquitous museum presenting and preserving new media quaranta domenico interview with jon ippolito published in noemalab leaping into the abyss and resurfacing with a pearl'</li><li>'lithuanian plaque located on the lithuanian academy of sciences honoring nazi war criminal jonas noreika in 2020 cryptokitties developer dapper labs released the nba topshot project which allowed the purchase of nfts linked to basketball highlights the project was built on top of the flow blockchain in march 2021 an nft of twitter founder jack dorseys firstever tweet sold for 29 million the same nft was listed for sale in 2022 at 48 million but only achieved a top bid of 280 on december 15 2022 donald trump former president of the united states announced a line of nfts featuring images of himself for 99 each it was reported that he made between 100001 and 1 million from the scheme nfts have been proposed for purposes related to scientific and medical purposes suggestions include turning patient data into nfts tracking supply chains and minting patents as nftsthe monetary aspect of the sale of nfts has been used by academic institutions to finance research projects the university of california berkeley announced in may 2021 its intention to auction nfts of two patents of inventions for which the creators had received a nobel prize the patents for crispr gene editing and cancer immunotherapy the university would however retain ownership of the patents 85 of funds gathered through the sale of the collection were to be used to finance research the collection included handwritten notices and faxes by james allison and was named the fourth pillar it sold in june 2022 for 22 ether about us54000 at the time george church a us geneticist announced his intention to sell his dna via nfts and use the profits to finance research conducted by nebula genomics in june 2022 20 nfts with his likeness were published instead of the originally planned nfts of his dna due to the market conditions at the time despite mixed reactions the project is considered to be part of an effort to use the genetic data of 15000 individuals to support genetic research by using nfts the project wants to ensure that the users submitting their genetic data are able to receive direct payment for their contributions several other companies have been involved in similar and often criticized efforts to use blockchainbased genetic data in order to guarantee users more control over their data and enable them to receive direct financial compensation whenever their data is being sold molecule protocol a project based in switzerland is trying to use nfts to digitize the intellectual copyright of individual scientists and research teams to finance research the projects whitepaper explains the aim is to represent the copyright of scientific papers as nfts and enable their trade'</li></ul> |
| 28 | <ul><li>'##tyle mathbb n other generalizations are discussed in the article on numbers there are two standard methods for formally defining natural numbers the first one named for giuseppe peano consists of an autonomous axiomatic theory called peano arithmetic based on few axioms called peano axioms the second definition is based on set theory it defines the natural numbers as specific sets more precisely each natural number n is defined as an explicitly defined set whose elements allow counting the elements of other sets in the sense that the sentence a set s has n elements means that there exists a one to one correspondence between the two sets n and s the sets used to define natural numbers satisfy peano axioms it follows that every theorem that can be stated and proved in peano arithmetic can also be proved in set theory however the two definitions are not equivalent as there are theorems that can be stated in terms of peano arithmetic and proved in set theory which are not provable inside peano arithmetic a probable example is fermats last theorem the definition of the integers as sets satisfying peano axioms provide a model of peano arithmetic inside set theory an important consequence is that if set theory is consistent as it is usually guessed then peano arithmetic is consistent in other words if a contradiction could be proved in peano arithmetic then set theory would be contradictory and every theorem of set theory would be both true and wrong the five peano axioms are the following 0 is a natural number every natural number has a successor which is also a natural number 0 is not the successor of any natural number if the successor of x displaystyle x equals the successor of y displaystyle y then x displaystyle x equals y displaystyle y the axiom of induction if a statement is true of 0 and if the truth of that statement for a number implies its truth for the successor of that number then the statement is true for every natural numberthese are not the original axioms published by peano but are named in his honor some forms of the peano axioms have 1 in place of 0 in ordinary arithmetic the successor of x displaystyle x is x 1 displaystyle x1 intuitively the natural number n is the common property of all sets that have n elements so it seems natural to define n as an equivalence class under the relation can be made in one to one correspondence unfortunately this does not work in set theory as such an equivalence class would not be a set because of russells paradox the standard solution is to define a particular set with n elements that will be called the natural number n the following definition was first published by'</li><li>'##rac sqrt 514 and cos 2 π 5 5 − 1 4 displaystyle cos tfrac 2pi 5tfrac sqrt 514 unlike the euler product and the divisor sum formula this one does not require knowing the factors of n however it does involve the calculation of the greatest common divisor of n and every positive integer less than n which suffices to provide the factorization anyway the property established by gauss that [UNK] d [UNK] n φ d n displaystyle sum dmid nvarphi dn where the sum is over all positive divisors d of n can be proven in several ways see arithmetical function for notational conventions one proof is to note that φd is also equal to the number of possible generators of the cyclic group cd specifically if cd ⟨ g ⟩ with gd 1 then gk is a generator for every k coprime to d since every element of cn generates a cyclic subgroup and all subgroups cd ⊆ cn are generated by precisely φd elements of cn the formula follows equivalently the formula can be derived by the same argument applied to the multiplicative group of the nth roots of unity and the primitive dth roots of unity the formula can also be derived from elementary arithmetic for example let n 20 and consider the positive fractions up to 1 with denominator 20 1 20 2 20 3 20 4 20 5 20 6 20 7 20 8 20 9 20 10 20 11 20 12 20 13 20 14 20 15 20 16 20 17 20 18 20 19 20 20 20 displaystyle tfrac 120tfrac 220tfrac 320tfrac 420tfrac 520tfrac 620tfrac 720tfrac 820tfrac 920tfrac 1020tfrac 1120tfrac 1220tfrac 1320tfrac 1420tfrac 1520tfrac 1620tfrac 1720tfrac 1820tfrac 1920tfrac 2020 put them into lowest terms 1 20 1 10 3 20 1 5 1 4 3 10 7 20 2 5 9 20 1 2 11 20 3 5 13 20 7 10 3 4 4 5 17 20 9 10 19 20 1 1 displaystyle tfrac 120tfrac 110tfrac 320tfrac 15tfrac 14tfrac 310tfrac 720tfrac 25tfrac 920tfrac 12tfrac 1120tfrac 35tfrac 1320tfrac 710tfrac 34tfrac 45tfrac 1720tfrac 910tfrac 1920tfrac 11 these twenty fractions are all the positive kd ≤ 1 whose denominators are the'</li><li>'n d if j 1 displaystyle beginalignedwidetilde operatorname ds jfnunderbrace leftfpm ast fast cdots ast fright jtext timesnoperatorname ds jfnbiggl beginarrayllfpm ntext if j1sum limits stackrel dmid nd1fdoperatorname ds j1fndtext if j1endarrayendaligned the function d f n displaystyle dfn by the equivalent pair of summation formulas in the next equation is closely related to the dirichlet inverse for an arbitrary function f d f n [UNK] j 1 n ds 2 j f n [UNK] m 1 [UNK] n 2 [UNK] [UNK] i 0 2 m − 1 2 m − 1 i − 1 i 1 ds i 1 f n displaystyle dfnsum j1noperatorname ds 2jfnsum m1leftlfloor frac n2rightrfloor sum i02m1binom 2m1i1i1widetilde operatorname ds i1fn in particular we can prove that f − 1 n d ε f 1 n displaystyle f1nleftdfrac varepsilon f1rightn a table of the values of d f n displaystyle dfn for 2 ≤ n ≤ 16 displaystyle 2leq nleq 16 appears below this table makes precise the intended meaning and interpretation of this function as the signed sum of all possible multiple kconvolutions of the function f with itself let p k n p n − k displaystyle pknpnk where p is the partition function number theory then there is another expression for the dirichlet inverse given in terms of the functions above and the coefficients of the qpochhammer symbol for n 1 displaystyle n1 given by f − 1 n [UNK] k 1 n p k ∗ μ n p k ∗ d f ∗ μ n × q k − 1 q q ∞ 1 − q displaystyle f1nsum k1nleftpkast mu npkast dfast mu nrighttimes qk1frac qqinfty 1q summation bell series list of mathematical series'</li></ul> |
| 19 | <ul><li>'hepatoblastoma is a malignant liver cancer occurring in infants and children and composed of tissue resembling fetal liver cells mature liver cells or bile duct cells they usually present with an abdominal mass the disease is most commonly diagnosed during a childs first three years of life alphafetoprotein afp levels are commonly elevated but when afp is not elevated at diagnosis the prognosis is poor patients are usually asymptomatic at diagnosis as a result disease is often advanced at diagnosis hepatoblastomas originate from immature liver precursor cells are typically unifocal affect the right lobe of the liver more often than the left lobe and can metastasize they are categorized into two types epithelial type and mixed epithelial mesenchymal typeindividuals with familial adenomatous polyposis fap a syndrome of earlyonset colonic polyps and adenocarcinoma frequently develop hepatoblastomas also betacatenin mutations have been shown to be common in sporadic hepatoblastomas occurring in as many as 67 of patientsrecently other components of the wnt signaling pathway have also demonstrated a likely role in constitutive activation of this pathway in the causation of hepatoblastoma accumulating evidence suggests that hepatoblastoma is derived from a pluripotent stem cellsyndromes with an increased incidence of hepatoblastoma include beckwith – wiedemann syndrome trisomy 18 trisomy 21 acardi syndrome li – fraumeni syndrome goldenhar syndrome von gierke disease and familial adenomatous polyposis the most common method of testing for hepatoblastoma is a blood test checking the alphafetoprotein level alphafetoprotein afp is used as a biomarker to help determine the presence of liver cancer in children at birth infants have relatively high levels of afp which fall to normal adult levels by the second year of life the normal level for afp in children has been reported as lower than 50 nanograms per milliliter ngml and 10 ngml in adults an afp level greater than 500 ngml is a significant indicator of hepatoblastoma afp is also used as an indicator of treatment success if treatments are successful in removing the cancer the afp level is expected to return to normal surgical removal of the tumor neoadjuvant chemotherapy prior to tumor removal and liver'</li><li>'##phorylaseb kinase deficiency gsd type xi gsd 11 fanconibickel syndrome glut2 deficiency hepatorenal glycogenosis with renal fanconi syndrome no longer considered a glycogen storage disease but a defect of glucose transport the designation of gsd type xi gsd 11 has been repurposed for muscle lactate dehydrogenase deficiency ldha gsd type xiv gsd 14 no longer classed as a gsd but as a congenital disorder of glycosylation type 1t cdg1t affects the phosphoglucomutase enzyme gene pgm1 phosphoglucomutase 1 deficiency is both a glycogenosis and a congenital disorder of glycosylation individuals with the disease have both a glycolytic block as muscle glycogen cannot be broken down as well as abnormal serum transferrin loss of complete nglycans as it affects glycogenolysis it has been suggested that it should redesignated as gsdxiv lafora disease is considered a complex neurodegenerative disease and also a glycogen metabolism disorder polyglucosan storage myopathies are associated with defective glycogen metabolism not mcardle disease same gene but different symptoms myophosphorylasea activity impaired autosomal dominant mutation on pygm gene ampindependent myophosphorylase activity impaired whereas the ampdependent activity was preserved no exercise intolerance adultonset muscle weakness accumulation of the intermediate filament desmin in the myofibers of the patients myophosphorylase comes in two forms form a is phosphorylated by phosporylase kinase form b is not phosphorylated both forms have two conformational states active r or relaxed and inactive t or tense when either form a or b are in the active state then the enzyme converts glycogen into glucose1phosphate myophosphorylaseb is allosterically activated by amp being in larger concentration than atp andor glucose6phosphate see glycogen phosphorylase § regulation unknown glycogenosis related to dystrophy gene deletion patient has a previously undescribed myopathy associated with both becker muscular dystrophy and a glycogen storage disorder of unknown aetiology methods to diagnose glycogen storage diseases include'</li><li>'bilirubin level 01 – 12 mgdl – total serum bilirubin level urine bilirubin may also be clinically significant bilirubin is not normally detectable in the urine of healthy people if the blood level of conjugated bilirubin becomes elevated eg due to liver disease excess conjugated bilirubin is excreted in the urine indicating a pathological process unconjugated bilirubin is not watersoluble and so is not excreted in the urine testing urine for both bilirubin and urobilinogen can help differentiate obstructive liver disease from other causes of jaundiceas with billirubin under normal circumstances only a very small amount of urobilinogen is excreted in the urine if the livers function is impaired or when biliary drainage is blocked some of the conjugated bilirubin leaks out of the hepatocytes and appears in the urine turning it dark amber however in disorders involving hemolytic anemia an increased number of red blood cells are broken down causing an increase in the amount of unconjugated bilirubin in the blood because the unconjugated bilirubin is not watersoluble one will not see an increase in bilirubin in the urine because there is no problem with the liver or bile systems this excess unconjugated bilirubin will go through all of the normal processing mechanisms that occur eg conjugation excretion in bile metabolism to urobilinogen reabsorption and will show up as an increase of urobilinogen in the urine this difference between increased urine bilirubin and increased urine urobilinogen helps to distinguish between various disorders in those systems in ancient history hippocrates discussed bile pigments in two of the four humours in the context of a relationship between yellow and black biles hippocrates visited democritus in abdera who was regarded as the expert in melancholy black bilerelevant documentation emerged in 1827 when m louis jacques thenard examined the biliary tract of an elephant that had died at a paris zoo he observed dilated bile ducts were full of yellow magma which he isolated and found to be insoluble in water treating the yellow pigment with hydrochloric acid produced a strong green color thenard suspected the green pigment was caused by impurities derived from mucus of bileleopold gmelin'</li></ul> |
| 14 | <ul><li>'by wnt signaling in the blastula chordin and nogginexpressing bcne center sia and xtwn can function as homo or heterodimers to bind a conserved p3 site within the proximal element pe of the goosecoid gsc promoter wnt signaling also acts with mvegt to upregulate xnr5 secreted from the nieuwkoop center in the interior dorsovegetal region which will then induce additional transcription factors such as xnr1 xnr2 gsc chordin chd the final cue is mediated by nodalactivin signaling inducing transcription factors that in combination with sia will induce the cerberus cer genethe organizer has both transcription and secreted factors transcription factors include goosecoid lim1 and xnot which are all homeodomain proteins goosecoid was the first organizer gene discovered providing “ the first visualization of spemannmangold organizer cells and of their dynamic changes during gastrulation ” while it was the first to be studied it is not the first gene to be activated following transcriptional activation by sia and xtwn gsc is expressed in a subset of cells encompassing 60° of arc on the dorsal marginal zone expression of gsc activates the expression of secreted signaling molecules ventral injection of gsc leads to a phenotype as seen in spemann and mangolds original experiment a twinned axissecreted factors from the organizer form gradients in the embryo to differentiate the tissues after the discovery of the sepmannmangold organizer many labs rushed to be the first to discover the inducing factors responsible for this organization this created a large international impact with labs in japan russia and germany changing the way they viewed and studied developmental organization however due to the slow progress in the field many labs move research interests away from the organizer but not before the impact of the discovery was made 60 years after the discovery of the organizer many nobel prizes were given to developmental biologists for work that was influenced by the organizer until the mid 19th century japan was a closed society that did not participate in advances in modern biology until later in that century at that time many students who went abroad to study in american and european labs came back with new ideas about approaches to developmental sciences when the returning students would try to incorporate their new ideas into the japanese experimental embryology they were rejected by the members of japanese biological society after the publication of the spemannmangold organizer many more students went to study abroad in european labs to learn much more about this organizer and returned to use'</li><li>'##ietal cell foveolar cell intestine enteroendocrine cell gastric inhibitory polypeptide s cell delta cell cholecystokinin enterochromaffin cell goblet cell paneth cell tuft cell enterocyte microfold cell liver hepatocyte hepatic stellate cell gallbladder cholecystocyte exocrine component of pancreas centroacinar cell pancreatic stellate cell islets of langerhans alpha cell beta cell delta cell pp cell f cell gamma cell epsilon cell thyroid gland follicular cell parafollicular cell parathyroid gland parathyroid chief cell oxyphil cell urothelial cell germ layer list of distinct cell types in the adult human body'</li><li>'##ing proliferation aligning cells in direction of flow and regulating many cell signalling factors mechanotransduction may act either by positive or negative feedback loops which may activate or repress certain genes to respond to the physical stress or strain placed on the vessel the cell reads flow patterns through integrin sensing receptors which provide a mechanical link between the extracellular matrix and the actin cytoskeleton this mechanism dictates how a cell will respond to flow patterns and can mediate cell adhesion which is especially relevant to the sprouting of new vessels through the process of mechanotransduction shear stress can regulate the expression of many different genes the following examples have been studied in the context of vascular remodelling by biomechanics endothelial nitric oxide synthase enos promotes unidirectional flow at the onset of heart beats and is upregulated by shear stress plateletderived growth factor pdgf transforming growth factor beta tgfβ and kruppellike factor 2 klf2 are induced by shear stress and may have upregulating effects on genes which deal with endothelial response to turbulent flow shear stress induces phosphorylation of vegf receptors which are responsible for vascular development especially the sprouting of new vessels hypoxia can trigger the expression of hypoxia inducible factor 1 hif1 or vegf in order to pioneer the growth of new sprouts into oxygendeprived areas of the embryo pdgfβ vegfr2 and connexion43 are upregulated by abnormal flow patterns shear stress upregulates nfκb which induces matrix metalloproteinases to trigger the enlargement of blood vesselsdifferent flow patterns and their duration can elicit very different responses based on the shearstressregulated genes both genetic regulation and physical forces are responsible for the process of embryonic vascular remodelling yet these factors are rarely studied in tandem the main difficulty in the in vivo study of embryonic vascular remodelling has been to separate the effects of physical cues from the delivery of nutrients oxygen and other signalling factors which may have an effect on vascular remodelling previous work has involved control of blood viscosity in early cardiovascular flow such as preventing the entry of red blood cells into blood plasma thereby lowering viscosity and associated shear stresses starch can also be injected into the blood stream in order to increase viscosity and shear stress studies'</li></ul> |
| 18 | <ul><li>'##ised lines or patterns blind stamps and often small metal pieces of furniture medieval stamps showed animals and figures as well as the vegetal and geometric designs that would later dominate book cover decoration until the end of the period books were not usually stood up on shelves in the modern way the most functional books were bound in plain white vellum over boards and had a brief title handwritten on the spine techniques for fixing gold leaf under the tooling and stamps were imported from the islamic world in the 15th century and thereafter the goldtooled leather binding has remained the conventional choice for high quality bindings for collectors though cheaper bindings that only used gold for the title on the spine or not at all were always more common although the arrival of the printed book vastly increased the number of books produced in europe it did not in itself change the various styles of binding used except that vellum became much less used although early coarse hempen paper had existed in china during the western han period 202 bc – 9 ad the easternhan chinese court eunuch cai lun c 50 – 121 ad introduced the first significant improvement and standardization of papermaking by adding essential new materials into its composition bookbinding in medieval china replaced traditional chinese writing supports such as bamboo and wooden slips as well as silk and paper scrolls the evolution of the codex in china began with foldedleaf pamphlets in the 9th century ad during the late tang dynasty 618 – 907 improved by the butterfly bindings of the song dynasty 960 – 1279 the wrapped back binding of the yuan dynasty 1271 – 1368 the stitched binding of the ming 1368 – 1644 and qing dynasties 1644 – 1912 and finally the adoption of westernstyle bookbinding in the 20th century coupled with the european printing press that replaced traditional chinese printing methods the initial phase of this evolution the accordionfolded palmleafstyle book most likely came from india and was introduced to china via buddhist missionaries and scriptureswith the arrival from the east of rag paper manufacturing in europe in the late middle ages and the use of the printing press beginning in the mid15th century bookbinding began to standardize somewhat but page sizes still varied considerably paper leaves also meant that heavy wooden boards and metal furniture were no longer necessary to keep books closed allowing for much lighter pasteboard covers the practice of rounding and backing the spines of books to create a solid smooth surface and shoulders supporting the textblock against its covers facilitated the upright storage of books and titling on spine this became common practice by the close of the 16th century but was consistently practiced in rome as early as the 1520s'</li><li>'##xtapose their product with another image listed as 123 after juxtaposition the complexity is increased with fusion which is when an advertisers product is combined with another image listed as 456 the most complex is replacement which replaces the product with another product listed as 789 each of these sections also include a variety of richness the least rich would be connection which shows how one product is associated with another product listed as 147 the next rich would be similarity which shows how a product is like another product or image listed as 258 finally the most rich would be opposition which is when advertisers show how their product is not like another product or image listed as 369 advertisers can put their product next to another image in order to have the consumer associate their product with the presented image advertisers can put their product next to another image to show the similarity between their product and the presented image advertisers can put their product next to another image in order to show the consumer that their product is nothing like what the image shows advertisers can combine their product with an image in order to have the consumer associate their product with the presented image advertisers can combine their product with an image to show the similarity between their product and the presented image advertisers can combine their product with another image in order to show the consumer that their product is nothing like what the image shows advertisers can replace their product with an image to have the consumer associate their product with the presented image advertisers can replace their product with an image to show the similarity between their product and the presented image advertisers can replace their product with another image to show the consumer that their product is nothing like what the image showseach of these categories varies in complexity where putting a product next to a chosen image is the simplest and replacing the product entirely is the most complex the reason why putting a product next to a chosen image is the most simple is because the consumer has already been shown that there is a connection between the two in other words the consumer just has to figure out why there is the connection however when advertisers replace the product that they are selling with another image then the consumer must first figure out the connection and figure out why the connection was made visual tropes and tropic thinking are a part of visual rhetoric while the field of visual rhetoric isnt necessarily concerned with the aesthetic choices of a piece the same principles of visual composition may be applied to the study and practice of visual art for example'</li><li>'used to color cloth for a very long time the technique probably reached its peak of sophistication in katazome and other techniques used on silks for clothes during the edo period in japan in europe from about 1450 they were commonly used to color old master prints printed in black and white usually woodcuts this was especially the case with playingcards which continued to be colored by stencil long after most other subjects for prints were left in black and white stencils were used for mass publications as the type did not have to be handwritten stencils were popular as a method of book illustration and for that purpose the technique was at its height of popularity in france during the 1920s when andre marty jean saude and many other studios in paris specialized in the technique low wages contributed to the popularity of the highly laborintensive process when stencils are used in this way they are often called pochoir in the pochoir process a print with the outlines of the design was produced and a series of stencils were used through which areas of color were applied by hand to the page to produce detail a collotype could be produced which the colors were then stenciled over pochoir was frequently used to create prints of intense color and is most often associated with art nouveau and art deco design aerosol stencils have many practical applications and the stencil concept is used frequently in industrial commercial artistic residential and recreational settings as well as by the military government and infrastructure management a template is used to create an outline of the image stencils templates can be made from any material which will hold its form ranging from plain paper cardboard plastic sheets metals and wood stencils are frequently used by official organizations including the military utility companies and governments to quickly and clearly label objects vehicles and locations stencils for an official application can be customized or purchased as individual letters numbers and symbols this allows the user to arrange words phrases and other labels from one set of templates unique to the item being labeled when objects are labeled using a single template alphabet it makes it easier to identify their affiliation or source stencils have also become popular for graffiti since stencil art using spraypaint can be produced quickly and easily these qualities are important for graffiti artists where graffiti is illegal or quasilegal depending on the city and stenciling surface the extensive lettering possible with stencils makes it especially attractive to political artists for example the anarchopunk band crass used stencils of antiwar anarchist feminist and anticonsumerist messages in'</li></ul> |
| 3 | <ul><li>'molecular at a basic level the analysis of size and morphology can provide some information on whether they are likely to be human or from another animal analyzed contents can include those visible to the naked eye such as seeds and other plant remains — to the microscopic including pollen and phytoliths parasites in coprolites can give information on the living conditions and health of ancient populations at the molecular level ancient dna analysis can be used both to identify the species and to provide dietary information a method using lipid analysis can also be used for species identification based on the range of fecal sterols and bile acids these molecules vary between species according to gut biochemistry and so can distinguish between humans and other animals an example of researchers using paleofeces for the gathering of information using dna analysis occurred at hinds cave in texas by hendrik poinar and his team the fecal samples obtained were over 2000 years old from the samples poinar was able to gather dna samples using the analysis methods recounted above from his research poinar found that the feces belonged to three native americans based on mtdna similarities to present day native americans poinar also found dna evidence of the food they ate there were samples of buckthorn acorns ocotillo nightshade and wild tobacco no visible remnants of these plants were visible in the fecal matter along with plant material there were also dna sequences of animal species such as bighorn sheep pronghorn antelope and cottontail rabbit this analysis of the diet was very helpful previously it was assumed that this population of native americans survived with berries being their main source of nutrients from the paleofeces it was determined that these assumptions were incorrect and in the approximately 2 days of food that are represented in a fecal sample 2 – 4 animal species and 4 – 8 plant species were represented the nutritional diversity of this archaic human population was rather extraordinaryan example of the use of lipid analysis for identification of species is at the neolithic site of catalhoyuk in turkey large midden deposits at the site are frequently found to contain fecal material either as distinct coprolites or compressed cess pit deposits this was initially thought to be from dog on the basis of digested bone however an analysis of the lipid profiles showed that many of the coprolites were actually from humansthe analysis of parasites from fecal material within cesspits has provided evidence for health and migration in past populations for example the identification of fish tapeworm eggs in acre in the crusader period indicate that this parasite was transported from northern europe the parasite'</li><li>'but may reject requirements to apply for a permit for certain gathering purposes the central difference being that one is an internal cultural evolution while the other is externally driven by the society or legal body that surrounds the culture'</li><li>'structural functionalism or simply functionalism is a framework for building theory that sees society as a complex system whose parts work together to promote solidarity and stabilitythis approach looks at society through a macrolevel orientation which is a broad focus on the social structures that shape society as a whole and believes that society has evolved like organisms this approach looks at both social structure and social functions functionalism addresses society as a whole in terms of the function of its constituent elements namely norms customs traditions and institutions a common analogy popularized by herbert spencer presents these parts of society as organs that work toward the proper functioning of the body as a whole in the most basic terms it simply emphasizes the effort to impute as rigorously as possible to each feature custom or practice its effect on the functioning of a supposedly stable cohesive system for talcott parsons structuralfunctionalism came to describe a particular stage in the methodological development of social science rather than a specific school of thought in sociology classical theories are defined by a tendency towards biological analogy and notions of social evolutionism functionalist thought from comte onwards has looked particularly towards biology as the science providing the closest and most compatible model for social science biology has been taken to provide a guide to conceptualizing the structure and function of social systems and analyzing evolution processes via mechanisms of adaptation functionalism strongly emphasises the preeminence of the social world over its individual parts ie its constituent actors human subjects while one may regard functionalism as a logical extension of the organic analogies for societies presented by political philosophers such as rousseau sociology draws firmer attention to those institutions unique to industrialized capitalist society or modernity auguste comte believed that society constitutes a separate level of reality distinct from both biological and inorganic matter explanations of social phenomena had therefore to be constructed within this level individuals being merely transient occupants of comparatively stable social roles in this view comte was followed by emile durkheim a central concern for durkheim was the question of how certain societies maintain internal stability and survive over time he proposed that such societies tend to be segmented with equivalent parts held together by shared values common symbols or as his nephew marcel mauss held systems of exchanges durkheim used the term mechanical solidarity to refer to these types of social bonds based on common sentiments and shared moral values that are strong among members of preindustrial societies in modern complex societies members perform very different tasks resulting in a strong interdependence based on the metaphor above of an organism in which many parts function together to sustain the whole durkheim argued that complex societies are held together by solidarity ie social bonds based on'</li></ul> |
| 22 | <ul><li>'1960 by harry hammond hess the ocean drilling program started in 1966 deepsea vents were discovered in 1977 by jack corliss and robert ballard in the submersible dsv alvin in the 1950s auguste piccard invented the bathyscaphe and used the bathyscaphe trieste to investigate the oceans depths the united states nuclear submarine nautilus made the first journey under the ice to the north pole in 1958 in 1962 the flip floating instrument platform a 355foot 108 m spar buoy was first deployed in 1968 tanya atwater led the first allwoman oceanographic expedition until that time gender policies restricted women oceanographers from participating in voyages to a significant extent from the 1970s there has been much emphasis on the application of large scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction early techniques included analog computers such as the ishiguro storm surge computer generally now replaced by numerical methods eg slosh an oceanographic buoy array was established in the pacific to allow prediction of el nino events 1990 saw the start of the world ocean circulation experiment woce which continued until 2002 geosat seafloor mapping data became available in 1995 study of the oceans is critical to understanding shifts in earths energy balance along with related global and regional changes in climate the biosphere and biogeochemistry the atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux and solar insolation recent studies have advanced knowledge on ocean acidification ocean heat content ocean currents sea level rise the oceanic carbon cycle the water cycle arctic sea ice decline coral bleaching marine heatwaves extreme weather coastal erosion and many other phenomena in regards to ongoing climate change and climate feedbacks in general understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of earths resources the intergovernmental oceanographic commission reports that 17 of the total national research expenditure of its members is focused on ocean science the study of oceanography is divided into these five branches biological oceanography investigates the ecology and biology of marine organisms in the context of the physical chemical and geological characteristics of their ocean environment chemical oceanography is the study of the chemistry of the ocean whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes ocean chemistry focuses primarily on the geochemical cycles the following is a central topic investigated by chemical oceanography ocean acidification ocean acidification describes the decrease in ocean ph that is caused by anthropogenic carbon dioxide co2 emissions into the atmosphere seawater is slightly alkaline'</li><li>'maintained by the hydrological division of the usgs for large streams for a basin with an area of 5000 square miles or more the river system is typically gauged at five to ten places the data from each gauging station apply to the part of the basin upstream that location given several decades of peak annual discharges for a river limited projections can be made to estimate the size of some large flow that has not been experienced during the period of record the technique involves projecting the curve graph line formed when peak annual discharges are plotted against their respective recurrence intervals however in most cases the curve bends strongly making it difficult to plot a projection accurately this problem can be overcome by plotting the discharge andor recurrence interval data on logarithmic graph paper once the plot is straightened a line can be ruled drawn through the points a projection can then be made by extending the line beyond the points and then reading the appropriate discharge for the recurrence interval in question runoff of water in channels is responsible for transport of sediment nutrients and pollution downstream without streamflow the water in a given watershed would not be able to naturally progress to its final destination in a lake or ocean this would disrupt the ecosystem streamflow is one important route of water from the land to lakes and oceans the other main routes are surface runoff the flow of water from the land into nearby watercourses that occurs during precipitation and as a result of irrigation flow of groundwater into surface waters and the flow of water from constructed pipes and channels streamflow confers on society both benefits and hazards runoff downstream is a means to collect water for storage in dams for power generation of water abstraction the flow of water assists transport downstream a given watercourse has a maximum streamflow rate that can be accommodated by the channel that can be calculated if the streamflow exceeds this maximum rate as happens when an excessive amount of water is present in the watercourse the channel cannot handle all the water and flooding occurs the 1993 mississippi river flood the largest ever recorded on the river was a response to a heavy long duration spring and summer rainfalls early rains saturated the soil over more than a 300000 square miles of the upper watershed greatly reducing infiltration and leaving soils with little or no storage capacity as rains continued surface depressions wetlands ponds ditches and farm fields filled with overland flow and rainwater with no remaining capacity to hold water additional rainfall was forced from the land into tributary channels and thence to the mississippi river for more than a month the total load of water from hundreds of tributaries exceeded the mississippi ’ s channel capacity causing it to spill over'</li><li>'double mass analysis is a simple graphical method to evaluate the consistency of hydrological data the dm approach plots the cumulative data of one variable against the cumulative data of a second variable a break in the slope of a linear function fit to the data is thought to represent a change in the relation between the variables this approach provides a robust method to determine a change in the behavior of precipitation and recharge in a simple graphical method it is a commonly used data analysis approach for investigating the behaviour of records made of hydrological or meteorological data at a number of locations it is used to determine whether there is a need for corrections to the data to account for changes in data collection procedures or other local conditions such changes may result from a variety of things including changes in instrumentation changes in observation procedures or changes in gauge location or surrounding conditions double mass analysis for checking consistency of a hydrological or meteorological record is considered to be an essential tool before taking it for analysis purpose this method is based on the hypothesis that each item of the recorded data of a population is consistentan example of a double mass analysis is a double mass plot or double mass curve for this points andor a joining line are plotted where the x and y coordinates are determined by the running totals of the values observed at two stations if both stations are affected to the same extent by the same trends then a double mass curve should follow a straight line a break in the slope of the curve would indicate that conditions have changed at one location but not at another breaks in the doublemass curve of such variables are caused by changes in the relation between the variables these changes may be due to changes in the method of data collection or to physical changes that affect the relation this technique is based on the principle that when each recorded data comes from the same parent population they are consistent let x i y i displaystyle xiyi be the data points then the procedure for double mass analysis is as follows divide the data into n i displaystyle ni distinct categories of equal slope s i displaystyle si obtain correction factor for category n i 1 displaystyle ni1 as c i s i s i 1 displaystyle cifrac sisi1 multiply n i 1 displaystyle ni1 category with c i displaystyle ci to get corrected data after correction repeat this process until all data points have the same slope statistics dubreuil p 1974 initiation a lanalyse hydrologique masson cie et orstom paris'</li></ul> |
| 24 | <ul><li>'sasaki is a design firm specializing in architecture interior design urban design space planning landscape architecture ecology civil engineering and place branding the firm is headquartered in boston massachusetts but practices on an international scale with offices in shanghai and denver colorado and clients and projects globally sasaki was founded in 1953 by landscape architect hideo sasaki while he served as a professor and landscape architecture chair at the harvard graduate school of design sasaki was founded upon collaborative interdisciplinary design unprecedented in design practice at the time and an emphasis on the integration of land buildings people and their contextsthrough the mid to late 1900s sasaki designed plazas including copley square corporate parks college campuses and master plans among other projectsthe firm includes a team of in house designers software developers and data analysts who support the practice today sasaki has over 300 employees across its diverse practice areas and between its two offices the firm engages in a wide variety of project types across its many disciplines in 2000 in honor of the passing of the firms founder the family of hideo sasaki together with sasaki and other financial supporters established the sasaki foundation the foundation which is a separate entity from sasaki gives yearly grants supporting communityled research at sasaki in 2012 sasaki opened an office in shanghai to support the firms work in china and the larger asia pacific regionin 2018 sasaki opened the incubator a coworking space designed by and located within the sasaki campus which houses the sasaki foundation as curator of programming the 5000 squarefoot space is home to several likeminded nonprofits organizations and individualsin 2020 sasaki established a new office in denver colorado marking the firms third physical studio location opening an office in denver a region where sasaki has been working since the 1960s positions sasaki to deliver on projects across western north america in 2007 sasaki was honored as the american society of landscape architects firm of the year in 2012 sasaki won the american planning association firm of the year awardsasaki has earned numerous consecutive pierre lenfant international planning awards from the american planning association in 2017 two of the five annual finalists for the rudy bruner award for urban excellence were sasaki projects the bruce c bolling municipal building boston ma and the chicago riverwalk both were recognized as silver medalists sasaki has been named a top 50 firm by architect magazine numerous timesthe firm has been recognized by the boston society of landscape architects bsla boston society of architects bsa american planning association apa american institute of architecture aia society for college and university planning scup urban land initiative uli dezeen and fast company among others notable sasakisp'</li><li>'to mark their termini the new fountains were expressions of the new baroque art which was officially promoted by the catholic church as a way to win popular support against the protestant reformation the council of trent had declared in the 16th century that the church should counter austere protestantism with art that was lavish animated and emotional the fountains of rome like the paintings of rubens were examples of the principles of baroque art they were crowded with allegorical figures and filled with emotion and movement in these fountains sculpture became the principal element and the water was used simply to animate and decorate the sculptures they like baroque gardens were a visual representation of confidence and powerthe first of the fountains of st peters square by carlo maderno 1614 was one of the earliest baroque fountains in rome made to complement the lavish baroque facade he designed for st peters basilica behind it it was fed by water from the paola aqueduct restored in 1612 whose source was 266 feet 81 m above sea level which meant it could shoot water twenty feet up from the fountain its form with a large circular vasque on a pedestal pouring water into a basin and an inverted vasque above it spouting water was imitated two centuries later in the fountains of the place de la concorde in paris the triton fountain in the piazza barberini 1642 by gian lorenzo bernini is a masterpiece of baroque sculpture representing triton halfman and halffish blowing his horn to calm the waters following a text by the roman poet ovid in the metamorphoses the triton fountain benefited from its location in a valley and the fact that it was fed by the aqua felice aqueduct restored in 1587 which arrived in rome at an elevation of 194 feet 59 m above sea level fasl a difference of 130 feet 40 m in elevation between the source and the fountain which meant that the water from this fountain jetted sixteen feet straight up into the air from the conch shell of the tritonthe piazza navona became a grand theater of water with three fountains built in a line on the site of the stadium of domitian the fountains at either end are by giacomo della porta the neptune fountain to the north 1572 shows the god of the sea spearing an octopus surrounded by tritons sea horses and mermaids at the southern end is il moro possibly also a figure of neptune riding a fish in a conch shell in the center is the fontana dei quattro fiumi the fountain of the four rivers 1648 – 51 a highly theatrical fountain by bernini with statues representing rivers from the four continents the nile danube'</li><li>'law the techniques of coppicing and hard pollarding can be used to rejuvenate a hedge where hedgelaying is not appropriate the term instant hedge has become known since early this century for hedging plants that are planted collectively in such a way as to form a mature hedge from the moment they are planted together with a height of at least 12 metres they are usually created from hedging elements or individual plants which means very few are actually hedges from the start as the plants need time to grow and entwine to form a real hedge an example of an instant hedge can be seen at the elveden hall estate in east anglia where fields of hedges can be seen growing in cultivated rows since 1998 the development of this type of mature hedge has led to such products being specified by landscape architects garden designers property developers insurance companies sports clubs schools and local councils as well as many private home owners demand has also increased from planning authorities in specifying to developers that mature hedges are planted rather than just whips a slender unbranched shoot or plant a real instant hedge could be defined as having a managed root growth system allowing the hedge to be sold with a continuous rootstrips rather than individual plants which then enables yearround planting during its circa 8year production time all stock should be irrigated clipped and treated with controlledrelease nutrients to optimise health a quickset hedge is a type of hedge created by planting live whitethorn common hawthorn cuttings directly into the earth hazel does not sprout from cuttings once planted these cuttings root and form new plants creating a dense barrier the technique is ancient and the term quickset hedge is first recorded in 1484 the word quick in the name refers to the fact that the cuttings are living as in the quick and the dead and not to the speed at which the hedge grows although it will establish quite rapidly an alternative meaning of quickset hedging is any hedge formed of living plants or of living plants combined with a fence the technique of quicksetting can also be used for many other shrubs and trees a devon hedge is an earth bank topped with shrubs the bank may be faced with turf or stone when stonefaced the stones are generally placed on edge often laid flat around gateways a quarter of devons hedges are thought to be over 800 years old there are approximately 33000 miles 53000 km of devon hedge which is more than any other county traditional farming throughout the county has meant that fewer devon hedges have been removed than elsewhere devon hedges are particularly important for wildlife habitat around 20 of'</li></ul> |
| 30 | <ul><li>'difficulty adjusting to this experience although adult daughters also tend to express difficulty however this may be a factor of age moreso than the relationship to the patient in that spouses tend to be older caregivers than adult children many studies have suggested that intervention may curb stress levels of caregivers there are many types of interventions available for cancer caregivers including educational problemsolving skills training and grief therapy familyfocused grief therapy has been shown to significantly improve overall distress levels and depression in those affected by cancer likewise interventions that increased patients general knowledge about their specific disease have been reported to reduce anxiety distress and help them take a more active part in the decision making process interventions by members of the healthcare system designed to teach caregivers proficiency in both the physical and psychological care of patients have been shown to benefit both partners interventions that focus on both the patient and the caregiver as a couple have proven more effective in helping adaptation to cancer than those that try to help the patient or caregiver individually largely due to the inclusion of training in supportive communication sexual counselling and partner support finally spirituality has been demonstrated to be related to quality of life for caregivers not every caregiver experiences only negative consequences from cancer caregiving for some caregivers there are personal benefits that stem from caring for their loved one and the benefits found might help to buffer the negative experiences that caregivers frequently face the concept of posttraumatic growth is of particular note when discussing the benefits of cancer caregiving and cancer in general posttraumatic growth is a positive psychological growth that occurs as a result of a traumatic incident studies have found that within the cancer caregiver population strong predictors of posttraumatic growth are less education being employed or displaying high avoidance tendencies presurgery and framing coping strategies in a positive style furthermore individuals who engage in religious coping or have high perceived social support are more likely to report posttraumatic growth other benefits of caregiving include an improved sense of selfworth increased selfsatisfaction a sense of mastery increased intimacy with their ill loved one and a sense of meaning experiencing a loved ones cancer may also cause significant lifestyle changes for caregivers for instance caregivers may become more proactive by engaging in health behaviours such as increased exercise better diets and increased screening however this finding is not conclusive some studies report that certain behaviours such as screening tend to decrease amongst caregivers'</li><li>'in oncology the fact that one round of chemotherapy does not kill all the cells in a tumor is a poorly understood phenomenon called fractional kill or fractional cell kill the fractional kill hypothesis states that a defined chemotherapy concentration applied for a defined time period will kill a constant fraction of the cells in a population independent of the absolute number of cells in solid tumors poor access of the tumor to the drug can limit the fraction of tumor cells killed but the validity of the fractional kill hypothesis has also been established in animal models of leukemia as well as in human leukemia and lymphoma where drug access is less of an issuebecause only a fraction of the cells die with each treatment repeated doses must be administered to continue to reduce the size of the tumor current chemotherapy regimens apply drug treatment in cycles with the frequency and duration of treatments limited by toxicity to the patient the goal is to reduce the tumor population to zero with successive fractional kills for example assuming a 99 kill per cycle of chemotherapy a tumor of 1011 cells would be reduced to less than one cell with six treatment cycles 1011 0016 1 however the tumor can also regrow during the intervals between treatments limiting the net reduction of each fractional kill the fractional killing of tumors in response to treatment is assumed to be due to the cell cycle specificity of chemotherapy drugs cytarabine a dnasynthesis inhibitor also known as arac is cited as the classic cell cycle phasespecific agent chemotherapy dosing schedules have been optimized based on the fact that cytarabine is only expected to be effective in the dna synthesis s phase of the cell cycle consistent with this leukemia patients respond better to cytarabine treatments given every 12 hours rather than every 24 hours this finding that can be explained by the fact that sphase in these leukemia cells lasts 18 – 20 hours allowing some cells to escape the cytotoxic effect of the drug if it is given every 24 hours however alternative explanations are possible as described below very little direct information is available on whether cells undergo apoptosis from a certain point in the cell cycle one study which did address this topic used flow cytometry or elutriation of synchronized cells treated with actinomycin d1 camptothecin or aphidicolin each of which had been documented to exert its effects in a particular phase of the cell cycle surprisingly the authors found that each of the agents was able to induce apoptosis in all phases of the cell cycle suggesting that the mechanism through which the drugs induce apoptosis may'</li><li>'a myeloma protein is an abnormal antibody immunoglobulin or more often a fragment thereof such as an immunoglobulin light chain that is produced in excess by an abnormal monoclonal proliferation of plasma cells typically in multiple myeloma or monoclonal gammopathy of undetermined significance other terms for such a protein are monoclonal protein m protein m component m spike spike protein or paraprotein this proliferation of the myeloma protein has several deleterious effects on the body including impaired immune function abnormally high blood viscosity thickness of the blood and kidney damage the concept and the term paraprotein were introduced by the berlin pathologist dr kurt apitz in 1940 then the senior physician of the pathological institute at the charite hospitalparaproteins allowed the detailed study of immunoglobulins which eventually led to the production of monoclonal antibodies in 1975 myeloma is a malignancy of plasma cells plasma cells produce immunoglobulins which are commonly called antibodies there are thousands of different antibodies each consisting of pairs of heavy and light chains antibodies are typically grouped into five classes iga igd ige igg and igm when someone has myeloma a malignant clone a rogue plasma cell reproduces in an uncontrolled fashion resulting in overproduction of the specific antibody the original cell was generated to produce each type of antibody has a different number of light chain and heavy chain pairs as a result there is a characteristic normal distribution of these antibodies in the blood by molecular weight when there is a malignant clone there is usually overproduction of a single antibody resulting in a spike on the normal distribution sharp peak on the graph which is called an m spike or monoclonal spike people will sometimes develop a condition called mgus monoclonal gammopathy of undetermined significance where there is overproduction of one antibody but the condition is benign noncancerous an explanation of the difference between multiple myeloma and mgus can be found in the international myeloma foundations patient handbook and concise reviewdetection of paraproteins in the urine or blood is most often associated with mgus where they remain silent and multiple myeloma an excess in the blood is known as paraproteinemia paraproteins form a narrow band or spike in protein electrophoresis as they are all exactly the same protein unlike normal immunoglobulin antibodies paraproteins cannot fight infection serum free lightchai'</li></ul> |
| 42 | <ul><li>'the 1800s in particular louis pasteurs work with the rabies vaccine in the late 1800s exemplifies this methodpasteur created several vaccines over the course of his lifetime his work prior to rabies involved attenuation of pathogens but not through serial passage in particular pasteur worked with cholera and found that if he cultured bacteria for long periods of time he could create an effective vaccine pasteur thought that there was something special about oxygen and this was why he was able to attenuate create a less virulent version of the bacteria pasteur also tried to apply this method to create a vaccine for anthrax although with less successnext pasteur wanted to apply this method to create a vaccine for rabies however rabies was unbeknownst to him caused by a virus not a bacterial pathogen like cholera and anthrax and for that reason rabies could not be cultured in the same way that cholera and anthrax could be methods for serial passage for viruses in vitro were not developed until the 1940s when john enders thomas huckle weller and frederick robbins developed a technique for this these three scientists subsequently won the nobel prize for their major advancementto solve this problem pasteur worked with the rabies virus in vivo in particular he took brain tissue from an infected dog and transplanted it into another dog repeating this process multiple times and thus performing serial passage in dogs these attempts increased the virulence of the virus then he realized that he could put dog tissue into a monkey to infect it and then perform serial passage in monkeys after completing this process and infecting a dog with the resulting virus pasteur realized that the virus was less virulent mostly pasteur worked with the rabies virus in rabbits ultimately to create his vaccine for rabies pasteur used a simple method that involved drying out tissue as is described in his notebook in a series of flasks in which air is maintained in a dry state … each day one suspends a thickness of fresh rabbit spinal tissue taken from a rabbit dead of rabies each day as well one inoculates under the skin of a dog 1 ml of sterilized bouillion in which has dispersed a small fragment of one of these desiccated spinal pieces beginning with a piece most distant in time from when it was worked upon in order to be sure that it is not at all virulent pasteur mostly used other techniques besides serial passage to create his vaccines however the idea of attenuating a virus through serial passage still holds one way to attenuate a virus'</li><li>'endogenous retrovirus endogenous viral element adenoassociated virus bornavirus paleovirus'</li><li>'viral load also known as viral burden is a numerical expression of the quantity of virus in a given volume of fluid including biological and environmental specimens it is not to be confused with viral titre or viral titer which depends on the assay when an assay for measuring the infective virus particle is done plaque assay focus assay viral titre often refers to the concentration of infectious viral particles which is different from the total viral particles viral load is measured using body fluids sputum and blood plasma as an example of environmental specimens the viral load of norovirus can be determined from runoff water on garden produce norovirus has not only prolonged viral shedding and has the ability to survive in the environment but a minuscule infectious dose is required to produce infection in humans less than 100 viral particlesviral load is often expressed as viral particles virions or infectious particles per ml depending on the type of assay a higher viral burden titre or viral load often correlates with the severity of an active viral infection the quantity of virus per ml can be calculated by estimating the live amount of virus in an involved fluid for example it can be given in rna copies per millilitre of blood plasma tracking viral load is used to monitor therapy during chronic viral infections and in immunocompromised patients such as those recovering from bone marrow or solid organ transplantation currently routine testing is available for hiv1 cytomegalovirus hepatitis b virus and hepatitis c virus viral load monitoring for hiv is of particular interest in the treatment of people with hiv as this is continually discussed in the context of management of hivaids an undetectable viral load does not implicate a lack of infection hiv positive patients on longterm combination antiretroviral therapy may present with an undetectable viral load on most clinical assays since the concentration of virus particles is below the limit of detection lod a 2010 review study by puren et al categorizes viral load testing into three types 1 nucleic acid amplification based tests nats or naats commercially available in the united states with food and drug administration fda approval or on the market in the european economic area eea with the ce marking 2 home – brew or inhouse nats 3 nonnucleic acidbased test there are many different molecular based test methods for quantifying the viral load using nats the starting material for amplification can be used to divide these molecular methods into three groups target amplification which uses the nucleic acid itself just a few of the'</li></ul> |
| 5 | <ul><li>'greater than zeroas an example of a low estimate combining nasas star formation rates the rare earth hypothesis value of fp · ne · fl 10−5 mayrs view on intelligence arising drakes view of communication and shermers estimate of lifetime r∗ 15 – 3 yr−1 fp · ne · fl 10−5 fi 10−9 fc 02drake above and l 304 yearsgives n 15 × 10−5 × 10−9 × 02 × 304 91 × 10−13ie suggesting that we are probably alone in this galaxy and possibly in the observable universe on the other hand with larger values for each of the parameters above values of n can be derived that are greater than 1 the following higher values that have been proposed for each of the parameters r∗ 15 – 3 yr−1 fp 1 ne 02 fl 013 fi 1 fc 02drake above and l 109 yearsuse of these parameters gives n 3 × 1 × 02 × 013 × 1 × 02 × 109 15600000monte carlo simulations of estimates of the drake equation factors based on a stellar and planetary model of the milky way have resulted in the number of civilizations varying by a factor of 100 in 2016 adam frank and woodruff sullivan modified the drake equation to determine just how unlikely the event of a technological species arising on a given habitable planet must be to give the result that earth hosts the only technological species that has ever arisen for two cases a this galaxy and b the universe as a whole by asking this different question one removes the lifetime and simultaneous communication uncertainties since the numbers of habitable planets per star can today be reasonably estimated the only remaining unknown in the drake equation is the probability that a habitable planet ever develops a technological species over its lifetime for earth to have the only technological species that has ever occurred in the universe they calculate the probability of any given habitable planet ever developing a technological species must be less than 25×10−24 similarly for earth to have been the only case of hosting a technological species over the history of this galaxy the odds of a habitable zone planet ever hosting a technological species must be less than 17×10−11 about 1 in 60 billion the figure for the universe implies that it is extremely unlikely that earth hosts the only technological species that has ever occurred on the other hand for this galaxy one must think that fewer than 1 in 60 billion habitable planets develop a technological species for there not to have been at least a second case of such a species over the past history of this galaxy as many observers have pointed'</li><li>'the possibility of life on venus is a subject of interest in astrobiology due to venuss proximity and similarities to earth to date no definitive evidence has been found of past or present life there in the early 1960s studies conducted via spacecraft demonstrated that the current venusian environment is extreme compared to earths studies continue to question whether life could have existed on the planets surface before a runaway greenhouse effect took hold and whether a relict biosphere could persist high in the modern venusian atmosphere with extreme surface temperatures reaching nearly 735 k 462 °c 863 °f and an atmospheric pressure 92 times that of earth the conditions on venus make waterbased life as we know it unlikely on the surface of the planet however a few scientists have speculated that thermoacidophilic extremophile microorganisms might exist in the temperate acidic upper layers of the venusian atmosphere in september 2020 research was published that reported the presence of phosphine in the planets atmosphere a potential biosignature however doubts have been cast on these observationsas of 8 february 2021 an updated status of studies considering the possible detection of lifeforms on venus via phosphine and mars via methane was reported on 2 june 2021 nasa announced two new related missions to venus davinci and veritas because venus is completely covered in clouds human knowledge of surface conditions was largely speculative until the space probe era until the mid20th century the surface environment of venus was believed to be similar to earth hence it was widely believed that venus could harbor life in 1870 the british astronomer richard a proctor said the existence of life on venus was impossible near its equator but possible near its poles science fiction writers were free to imagine what venus might be like until the 1960s among the speculations were that it had a junglelike environment or that it had oceans of either petroleum or carbonated water microwave observations published by c mayer et al in 1958 indicated a hightemperature source 600 k strangely millimetreband observations made by a d kuzmin indicated much lower temperatures two competing theories explained the unusual radio spectrum one suggesting the high temperatures originated in the ionosphere and another suggesting a hot planetary surface in 1962 mariner 2 the first successful mission to venus measured the planets temperature for the first time and found it to be about 500 degrees celsius 900 degrees fahrenheit since then increasingly clear evidence from various space probes showed venus has an extreme climate with a greenhouse effect generating a constant temperature of about 500 °c 932 °f on the surface the atmosphere contains sulfuric acid clouds in 1968 nasa reported that air pressure on'</li><li>'##restrial life popular magazine entertainment weekly gave the book a grade of b saying it was not an easy read but calling it a live elegant overview it was reviewed by nature physics today and new scientist with the latter commenting on occasional digressions but declaring the book beautifully written reader reviews are 85 five stars on amazon and over 90 like the book on goodreads the 2011 paperback edition has updates to help keep up with the accelerating pace of exoplanet discovery'</li></ul> |
| 41 | <ul><li>'from the current plaza de la universidad his motto was e daniel molina project no documentation of this project is preserved except for the proposed solution for the plaza de cataluna his motto was hygiene comfort and beautyjosep fontsere project jose fontsere was a young architect son of the municipal architect jose fontsere domenech and won the third runnerup prize with a project that enhanced the centrality of passeig de gracia and linked the neighboring centers with a set of diagonals that respected their original plots his motto was do not destroy to build but conserve to rectify and build to enlarge garriga i roca project the municipal architect miquel garriga i roca presented six projects the best qualified responded to a grid solution that linked the city with gracia leaving only sketched lines that would have to continue developing the future plot his motto was one more sacrifice to contribute to the eixample of barcelonaother projects the project of josep massanes and that of jose maria planas proposed a mere extension while maintaining the wall around the new space the latter had a similarity with the project presented by the owners of the paseo de gracia since both projects were based on a mere extension on both sides of the paseo de gracia two other simpler projects were that of tomas bertran soler who proposed a new neighborhood in place of the citadel converting the passeig de sant joan into an axis similar to the rambla and a very elementary one attributed to francisco soler mestres who died three days before the reading of the prizes according to the municipal council the winning project was a proposal by antoni rovira based on a circular mesh that enveloped the walled city and grew radially harmoniously integrating the surrounding villages it was presented with the slogan le trace dune ville est oeuvre du temps plutot que darchitecte the phrase is originally from leonce reynaud an architectural reference of rovira it was structured in three areas where the different sectors of the population were combined with social activities with a logic of neighborhoods and hierarchy of space and public services based on a proposal to replace the wall a mesh of rectangular blocks with a central courtyard and a height of 19 meters was deployed a few main streets were the junction between blocks of the hippodamus structure to readjust the square profile to the semicircle that surrounded the city rovira proposes his solution with a clear center located in the plaza de cataluna while cerda moved the centrality to the plaza de la gloria'</li><li>'to hire opticos design inc in berkeley california to draft the codebecause of the growing number of consultants advertising themselves as capable of writing fbcs but with little or no training in 2004 the nonprofit formbased codes institute was organized to establish standards and teach best practices in addition smartcode workshops are regularly scheduled by placemakerscom smartcodeprocom and smartcodelocalcom in spring 2014 a new graduatelevel studio dedicated to formbased coding was launched at california state polytechnic university “ formbased codes in the context of integrated urbanism ” is one of the only full courses on the subject in the country the course is taught by tony perez director of formbased coding at opticos design formbased codes commonly include the following elements regulating plan a plan or map of the regulated area designating the locations where different building form standards apply based on clear community intentions regarding the physical character of the area being coded public space standards specifications for the elements within the public realm eg sidewalks travel lanes onstreet parking street trees street furniture etc building form standards regulations controlling the configuration features and functions of buildings that define and shape the public realm administration a clearly defined application and project review process definitions a glossary to ensure the precise use of technical termsformbased codes also sometimes include architectural standards regulations controlling external architectural materials and quality landscaping standards regulations controlling landscape design and plant materials on private property as they impact public spaces eg regulations about parking lot screening and shading maintaining sight lines insuring unobstructed pedestrian movements etc signage standards regulations controlling allowable signage sizes materials illumination and placement environmental resource standards regulations controlling issues such as storm water drainage and infiltration development on slopes tree protection solar access etc annotation text and illustrations explaining the intentions of specific code provisions the types of buildings that make for a lively main street are different from the types of buildings that make for a quiet residential street building form standards are sets of enforceable design regulations for controlling building types and how they impact the public realm these standards are mapped to streets on a regulating plan building form standards can control such things as the alignment of buildings to the street how close buildings are to sidewalks the visibility and accessibility of building entrances minimum and maximum buildings heights minimum or maximum lot frontage coverage minimum and maximum amounts of window coverage on facades physical elements required on buildings eg stoops porches types of permitted balconies and the general usage of floors eg office residential or retail these regulations are less concerned with architectural styles and designs than in how buildings shape public spaces if a local government also wishes to'</li><li>'a parisian influencehowever city beautiful was not solely concerned with aesthetics the term ‘ beautility ’ derived from the american city beautiful philosophy which meant that the beautification of a city must also be functional beautility including the proven economic value of improvements influenced australian town planningthere were no formal city beautiful organisations that led this movement in australia rather it was influenced by communications among professionals and bureaucrats in particular architectplanners and local government reformers in the early federation era some influential australians were determined that their cities be progressive and competitive adelaide was used as an australian example of the “ benefits of comprehensive civic design ” with its ring of parklands beautification of the city of hobart for example was considered a way to increase the city ’ s popularity as a tourist destination walter burley griffin incorporated city beautiful principles for his design for canberra griffin was influenced by washington dc with grand axes and vistas and a strong central focal point with specialised centres and being a landscape architect used the landscape to complement this layout john sulman however was australias leading proponent of the city beautiful movement and in 1921 wrote the book an introduction to australian city planning both the city beautiful and the garden city philosophies were represented by sulman ’ s “ geometric or contour controlled ” designs of the circulatory road systems in canberra the widths of pavements were also reduced and vegetated areas were increased such as planted road verges melbourne ’ s grid plan was considered dull and monotonous by some people and so the architect william campbell designed a blueprint for the city the main principle behind this were diagonal streets providing sites for new and comprehensive architecture and for special buildings the designs of paris and washington were major inspirations for this plan world war i prolonged the city beautiful movement in australia where more memorials were erected than in any other country although city beautiful or artistic planning became a part of comprehensive town planning the great depression of the 1930s largely ended this fashion defensible space garden city movement mira lloyd dock and the progressive era conservation movement van nus w 1975 the fate of city beautiful thought in canada 1893 – 1930 historical papers communications historiques edmonton the canadian historical associationla societe historique du canada 10 1 191 – 210 doi107202030796ar'</li></ul> |
| 32 | <ul><li>'##tyle widehat tau alpha omega begincasesfrac left1leftr01alpha right2rightleft1leftr02alpha right2rightleft1r01alpha r02alpha exp left2ikz0lrightright2textif krho leq omega cfrac 4im leftr01alpha rightim leftr02alpha rightexp left2leftkz0rightlrightleft1r01alpha r02alpha exp left2leftkz0rightlrightright2textif krho omega cendcases where r 0 j α displaystyle r0jalpha are the fresnel reflection coefficients for α s p displaystyle alpha sp polarized waves between media 0 and j 1 2 displaystyle j12 k z 0 ω c 2 − k ρ 2 displaystyle kz0sqrt omega c2krho 2 is the component of the wavevector in the region 0 perpendicular to the surface of the halfspace l displaystyle l is the separation distance between the two halfspaces and c displaystyle c is the speed of light in vacuumcontributions to heat transfer for which k ρ ≤ ω c displaystyle krho leq omega c arise from propagating waves whereas contributions from k ρ ω c displaystyle krho omega c arise from evanescent waves thermophotovoltaic energy conversion thermal rectification localized cooling heatassisted magnetic recording'</li><li>'francis 1852 pp 238 – 333 cited page numbers are from the translation a fresnel ed h de senarmont e verdet and l fresnel 1866 – 70 oeuvres completes daugustin fresnel 3 volumes paris imprimerie imperiale vol 1 1866 vol 2 1868 vol 3 1870 e hecht 2017 optics 5th ed pearson education isbn 9781292096933 c huygens 1690 traite de la lumiere leiden van der aa translated by sp thompson as treatise on light university of chicago press 1912 project gutenberg 2005 cited page numbers match the 1912 edition and the gutenberg html edition b powell july 1856 on the demonstration of fresnels formulas for reflected and refracted light and their applications philosophical magazine and journal of science series 4 vol 12 no 76 pp 1 – 20 ja stratton 1941 electromagnetic theory new york mcgrawhill e t whittaker 1910 a history of the theories of aether and electricity from the age of descartes to the close of the nineteenth century london longmans green co'</li><li>'to compensate for this change as an example the index drop for different glass types is displayed in the picture on the right for different annealing rates note that the annealing rate is not necessarily constant during the cooling process typical “ average ” annealing rates for precision molding are between 1000 kh and 10000 kh or higher not only the refractive index but also the abbenumber of the glass is changed due to fast annealing the shown points in the picture on the right indicate an annealing rate of 3500khsocalled lowtgglasses with a maximum transition temperature of less than 550 °c have been developed in order to enable new manufacturing routes for the moulds mould materials such as steel can be used for moulding lowtgglasses whereas hightg – glasses require a hightemperature mould material such as tungsten carbide the mould material must have sufficient strength hardness and accuracy at high temperature and pressure good oxidation resistance low thermal expansion and high thermal conductivity are also required the material of the mould has to be suitable to withstand the process temperatures without undergoing deforming processes therefore the mould material choice depends critically on the transition temperature of the glass material for lowtgglasses steel moulds with a nickel alloy coating can be used since they cannot withstand the high temperatures required for regular optical glasses heatresistant materials such as carbide alloys have to be 
used instead in this case in addition mould materials include aluminium alloys glasslike or vitreous carbon silicon carbide silicon nitride and a mixture of silicon carbide and carbona commonly used material in mould making is tungsten carbide the mould inserts are produced by means of powder metallurgy ie a sintering process followed by postmachining processes and sophisticated grinding operations most commonly a metallic binder usually cobalt is added in liquid phase sintering in this process the metallic binder improves the toughness of the mould as well as the sintering quality in the liquid phase to fully dense material moulds made of hard materials have a typical lifetime of thousands of parts size dependent and are costeffective for volumes of 2001000 depending upon the size of the part this article describes how mould inserts are manufactured for precision glass moulding in order to ensure high quality standards metrology steps are implemented between each process step powder processing this process step is responsible for achieving grain sizes suitable for pressing and machining the powder is processed by milling the raw material pressing'</li></ul> |
| 17 | <ul><li>'the 20th century however the glacier is still over 30 km 19 mi long in sikkim 26 glaciers examined between the years 1976 and 2005 were retreating at an average rate of 1302 m 427 ft per year overall glaciers in the greater himalayan region that have been studied are retreating an average of between 18 and 20 m 59 and 66 ft annually the only region in the greater himalaya that has seen glacial advances is in the karakoram range and only in the highest elevation glaciers but this has been attributed possibly increased precipitation as well as to the correlating glacial surges where the glacier tongue advances due to pressure build up from snow and ice accumulation further up the glacier between the years 1997 and 2001 68 km 42 mi long biafo glacier thickened 10 to 25 m 33 to 82 ft midglacier however it did not advance with the retreat of glaciers in the himalayas a number of glacial lakes have been created a growing concern is the potential for glofs researchers estimate 21 glacial lakes in nepal and 24 in bhutan pose hazards to human populations should their terminal moraines fail one glacial lake identified as potentially hazardous is bhutans raphstreng tsho which measured 16 km 099 mi long 096 km 060 mi wide and 80 m 260 ft deep in 1986 by 1995 the lake had swollen to a length of 194 km 121 mi 113 km 070 mi in width and a depth of 107 m 351 ft in 1994 a glof from luggye tsho a glacial lake adjacent to raphstreng tsho killed 23 people downstreamglaciers in the akshirak range in kyrgyzstan experienced a slight loss between 1943 and 1977 and an accelerated loss of 20 of their remaining mass between 1977 and 2001 in the tien shan mountains which kyrgyzstan shares with china and kazakhstan studies in the northern areas of that mountain range show that the glaciers that help supply water to this arid region lost nearly 2 km3 048 cu mi of ice per year between 1955 and 2000 the university of oxford study also reported that an average of 128 of the volume of these glaciers had been lost per year between 1974 and 1990the pamirs mountain range located primarily in tajikistan has approximately eight thousand glaciers many of which are in a general state of retreat during the 20th century the glaciers of tajikistan lost 20 km3 48 cu mi of ice the 70 km 43 mi long fedchenko glacier which is the largest in tajikistan and the largest nonpolar glacier on earth retreated 1 km 062 mi between the years 1933 and 2006 and lost 44 km2 17 sq mi of its surface area due'</li><li>'sheets a 3d icesheet model which accounts for polythermal conditions coexistence of ice at and below the melting point in different parts of an ice sheet'</li><li>'made of the glaciers form and expected depth and the results were in quite good agreement with their expectations in total blumcke and hess completed 11 holes to the glacier bed between 1895 and 1909 and drilled many more holes that did not penetrate the glacier the deepest hole they drilled was 224 m vallot dutoit and mercanton in 1897 emile vallot drilled a 25 m hole in the mer de glace using a 3 m high cable tool with a steel drillbit which had crossshaped blades and weighed 7 kg this proved to be too light to drill effectively and only 1 m progress was made on the first day a 20 kg iron rod was added and progress improved to 2 m per hour a stick was used to twist the rope above the hole and as it untwisted it cut a circular hole the hole diameter was 6 cm the rope was also pulled back and let fall so the drill used a combination of percussion and rotational 
cutting the drilling site was chosen to be near a small stream so that the hole could be continuously replenished with water in order to carry away the fragments of ice released at the bottom of the hole by the drilling process the ice chips were encouraged to flow up the hole by raising the drillbit higher every ten strokes for three strokes in a row the drilling gear was removed from the hole each night to prevent it freezing in placewhen the hole reached 205 m the 20 kg rod was no longer enough to counteract the braking effect of the water in the hole and progress slowed again to 1 m per hour a new rod weighing 40 kg was forged in chamonix which brought the speed back up to 28 m per hour but at 25 m the drill bit stuck in the hole near the bottom vallot poured salt down the hole to try to melt the ice and lowered a piece of iron to try to knock it loose but the hole had to be abandoned emile vallots son joseph vallot wrote a description of the drilling project and concluded that to be successful ice drilling should be done as quickly as possible perhaps in shifts and that the drill should have cutting edges so that any deformation to the hole would be corrected as the drill was reinserted into the hole which would avoid the drill bit wedging as happened in this caseconstant dutoit and paullouis mercanton carried out experiments on the trient glacier in 1900 in response to a problem posed by the swiss society of natural sciences in 1899 for their annual prix schlafli a scientific prize the problem was to determine the internal speed of flow of a glacier by'</li></ul> |
| 38 | <ul><li>'esperanto studies in 20182019 the program celebrated its 20th year from 1982 to 1996 together with the united nations office of conference services crd organized an annual conference in new york city for most of the early years crd published annual conference reports with all papers given at the conference in question the center now publishes in cooperation with university press of america a series of monographs which includes selected papers from the conferences'</li><li>'language management is a discipline that consists of satisfying the needs of people who speak multiple different languages these may be in the same country in companies and in cultural or international institutions where one must use multiple languages there are currently about 6000 languages in the world 85 of which are protected by sovereign states the universal declaration of unesco on cultural diversity in 2001 recalls the richness of global cultural heritage which comes from its cultural diversity this intangible cultural heritage passed down from generation to generation is constantly recreated by communities and groups according to their environment their interaction with nature and their history and brings a feeling of identity and of continuity thus contributing to the promotion of respect of cultural diversity and human creativity the declaration of montreal in 2007 repeated this concern unesco organized a conference on multilingualism for cultural diversity and participation of all in cyberspace in bamako mali on may 6 and 7 2005 in partnership with the african academy of languages acalan the organisation internationale de la francophonie oif and the government of mali as well as other international organizations unesco is otherwise responsible for the introduction of the concept of intangible cultural heritage which manages the cultural heritage in terms of its information support for example text and images associated with the louvre museum in france are part of the intangible cultural heritage and it goes without saying that the diversity of the visitors requires the management of text in several languages this meeting aimed to prepare the second phase of the world summit of the society of information held in tunis tunisia 16 to 18 of november 2005 the other part the phenomenon of globalization produces exchanges which requires the management of different languages at the nodes of interconnection airports parking lots the internet finally produces commercial exchanges indifferent to linguistic frontiers and virtual communities like wikipedia are where the participants speaking different languages can dialog and exchange information and knowledge international institutions governments and firms are faced with language management needs in international institutions languages can have different statutes official language or work language plenty of states have multiple official languages in their territory this is the case in belgium dutch french german in switzerland german french italian romansch in canada french and english in numerous african countries and in luxembourg french german luxembourgish in france where many regional languages exist especially in the regions on the border crossborder languages and in brittany breton none of them have official status therefore a certain number of states have put linguistic policies in place on a larger scale the european union has also defined a linguistic policy which distinguishes 23 official languages upon entrance to school children of diverse 
cultures are forced to abandon their cultural roots and their mother tongues to the benefit of the normative language chosen by the school research has shown that'</li><li>'or during military service in other contexts it has come to seem excessively formal and oldfashioned to most danes even at job interviews and among parliamentarians du has become standard in written danish de remains current in legal legislative and formal business documents as well as in some translations from other languages this is sometimes audiencedependent as in the danish governments general use of du except in healthcare information directed towards the elderly where de is still used other times it is maintained as an affectation as by the staff of some formal restaurants the weekendavisen newspaper tv 2 announcers and the avowedly conservative maersk corporation attempts by other corporations to avoid sounding either stuffy or too informal by employing circumlocutions — using passive phrasing or using the pronoun man one — have generally proved awkward and been illreceived and with the notable exception of the national railway dsb most have opted for the more personable du form icelandic modern icelandic is the scandinavian language closest to old norse which made a distinction between the plural þer and the dual þið this distinction continued in written icelandic the early 1920 when the plural þer was also used on formal occasions the formal usage of þer seems to have pushed the dual þið to take over the plural so modern icelandic normally uses þið as a plural however in formal documents such as by the president þer is still used as plural and the usage of þer as plural and þið as dual is still retained in the icelandic translation of the christian scriptures there are still a number of fixed expressions — particularly religious adages such as seek and ye shall find leitið og þer munuð finna — and the formal pronoun is sometimes used in translations from a language that adheres to a t – v distinction but otherwise it appears only when one wants to be excessively formal either from the gravity of the occasion as in court proceedings and legal correspondence or out of contempt in order to ridicule another persons selfimportance and þu is used in all other cases norwegian in norwegian the polite form dedem bokmal and dedykk nynorsk has more or less disappeared in both spoken and written language norwegians now exclusively use du and the polite form does not have a strong cultural pedigree in the country until recently de would sometimes be found in written works business letters plays and translations where an impression of formality must be retained the popular belief that de is reserved for the king is incorrect since according to royal etiquette the king and'</li></ul> |
| 15 | <ul><li>'aicardi – goutieres syndrome ags which is completely distinct from the similarly named aicardi syndrome is a rare usually early onset childhood inflammatory disorder most typically affecting the brain and the skin neurodevelopmental disorder the majority of affected individuals experience significant intellectual and physical problems although this is not always the case the clinical features of ags can mimic those of in utero acquired infection and some characteristics of the condition also overlap with the autoimmune disease systemic lupus erythematosus sle following an original description of eight cases in 1984 the condition was first referred to as aicardi – goutieres syndrome ags in 1992 and the first international meeting on ags was held in pavia italy in 2001ags can occur due to mutations in any one of a number of different genes of which nine have been identified to date namely trex1 rnaseh2a rnaseh2b rnaseh2c which together encode the ribonuclease h2 enzyme complex samhd1 adar1 and ifih1 coding for mda5 this neurological disease occurs in all populations worldwide although it is almost certainly underdiagnosed to date 2014 at least 400 cases of ags are known the initial description of ags suggested that the disease was always severe and was associated with unremitting neurological decline resulting in death in childhood as more cases have been identified it has become apparent that this is not necessarily the case with many patients now considered to demonstrate an apparently stable clinical picture alive in their 4th decade moreover rare individuals with pathogenic mutations in the agsrelated genes can be minimally affected perhaps only with chilblains and are in mainstream education and even affected siblings within a family can show marked differences in severityin about ten percent of cases ags presents at or soon after birth ie in the neonatal period this presentation of the disease is characterized by microcephaly neonatal seizures poor feeding jitteriness cerebral calcifications accumulation of calcium deposits in the brain white matter abnormalities and cerebral atrophy thus indicating that the disease process became active before birth ie in utero these infants can have hepatosplenomegaly and thrombocytopaenia very much like cases of transplacental viral infection about one third of such early presenting cases most frequently in association with mutations in trex1 die in early childhoodotherwise the majority of ags cases present in early infancy sometimes after an apparently normal period of development during the first few months after birth these children develop'</li><li>'study of this gene transfer and its causes ecological genetics'</li><li>'not emerge until the 1990s this theory went through a series of transformations and elaborations until 2005 when bronfenbrenner died bronfenbrenner further developed the model by adding the chronosystem which refers to how the person and environments change over time he also placed a greater emphasis on processes and the role of the biological person the process – person – context – time model ppct has since become the bedrock of the bioecological model ppct includes four concepts the interactions between the concepts form the basis for the theory 1 process – bronfenbrenner viewed proximal processes as the primary mechanism for development featuring them in two central propositions of the bioecological modelproposition 1 human development takes place through processes of progressively more complex reciprocal 
interaction between an active evolving biopsychological human organism and the persons objects and symbols in its immediate external environment to be effective the interaction must occur on a fairly regular basis over extended periods of time such enduring forms of interaction in the immediate environment are referred to as proximal processesproximal processes are the development processes of systematic interaction between person and environment bronfenbrenner identifies group and solitary activities such as playing with other children or reading as mechanisms through which children come to understand their world and formulate ideas about their place within it however processes function differently depending on the person and the contextproposition 2 the form power content and direction of the proximal processes effecting development vary systematically as a joint function of the characteristics of the developing person of the environment — both immediate and more remote — in which the processes are taking place the nature of the developmental outcomes under consideration and the social continuities and changes occurring over time through the life course and the historical period during which the person has lived2 person – bronfenbrenner acknowledged the role that personal characteristics of individuals play in social interactions he identified three personal characteristics that can significantly influence proximal processes across the lifespan demand characteristics such as age gender or physical appearance set processes in motion acting as “ personal stimulus ” characteristics resource characteristics are not as immediately recognizable and include mental and emotional resources such as past experiences intelligence and skills as well as material resources such as access to housing education and responsive caregivers force characteristics are related to variations in motivation persistence and temperament bronfenbrenner notes that even when children have equivalent access to resources their developmental courses may differ as a function of characteristics such as drive to succeed and persistence in the face of hardship in doing this bronfenbrenner provides a'</li></ul> |
| 34 | <ul><li>'different settings and populations such as by refugees in san diego seeking in – person medical interpretation options by homeless adults in ann arbor michigan by dr claudia mitchell to support community health workers and teachers in rural south africa and by dr laura s lorenz of the heller school for social policy and management at brandeis university in her work with brain injury survivors photovoice has been adopted by multiple disciplines often used in conjunction with other communitybased and participatory action research methods in modern research photovoice is a qualitative approach for addressing sensitive and complex issues that allows individuals to openly share their perspectives where one might otherwise be reluctant to do photovoice is used to both to elicit and analyze data in the interest knowledge dissemination and mobilization researchers who employ photovoice offer a nuanced understanding of community issues to the scientific community the aim of this understanding is to inform and create appropriate interventions and actions regarding complex problems including but not limited to health and wellbeing social inequality and socioeconomic disparity for example in higher education the photovoice model has been used to teach social work students photovoice has also been used as a tool to engage children and youth giving them a safe environment and opportunity to communicate concerns and coping strategies to policymakers and service providers overall the modern implementation of photovoice is utilized to investigate a persons lived experience concerning systemic structures and social power relations and communicate this experience through a medium reaching beyond verbal communication also known as participatory photography or photo novella photovoice is considered a sub – type of participatory visual methods or picturevoice which includes techniques such as photoelicitation and digital storytelling these techniques allow research participants to create visuals that capture their individual perspectives as part of the research process an example of this is found in project lives a participatory photography project used to create a new image of project housing dwellers published in april 2015 two other forms of picturevoice include paintvoice stemming from the work of michael yonas and comicvoice which has been pioneered by john bairds create a comic project since 2008 and to a lesser extent by michael bitzs comic book project in international research photovoice has been seen to allow participants from the developing world to define how they want to be represented to the international community the individuals are facilitated and given control to tell their stories and perspectives which empower them to be engaged and maintain a firm sense of authorship over their representations this helps to convey a stereotypefree picture of what it means to live in a developing country to those supporting ie funders'</li><li>'an active suzukitraining organ scheme is under way in the australian city of newcastle the application of suzukis teaching philosophy to the mandolin is currently being researched in italy by amelia saracco rather than focusing on a specific instrument at the stage of early childhood education ece a suzuki early childhood education sece curriculum for preinstrumental ece was developed within the suzuki philosophy by dorothy sharon jones saa jeong cheol wong asa emma okeefe ppsa anke van der bijl esa and yasuyo matsui teri the sece curriculum is 
designed for ages 0 – 3 and uses singing nursery rhymes percussion audio recordings and whole body movements in a group setting where children and their adult caregivers participate side by side the japanese based sece curriculum is different from the englishbased sece curriculum the englishbased curriculum is currently being adapted for use in other languages a modified suzuki philosophy curriculum has been developed to apply suzuki teaching to heterogeneous instrumental music classes string orchestras in schools trumpet was added to the international suzuki associations list of suzuki method instruments in 2011 the application of suzukis teaching philosophy to the trumpet is currently being researched in sweden the first trumpet teacher training course to be offered by the european suzuki association in 2013 suzuki teacher training for trumpet 2013 supplementary materials are also published under the suzuki name including some etudes notereading books piano accompaniment parts guitar accompaniment parts duets trios string orchestra and string quartet arrangements of suzuki repertoire in the late 19th century japans borders were opened to trade with the outside world and in particular to the importation of western culture as a result of this suzukis father who owned a company which had manufactured the shamisen began to manufacture violins instead in his youth shinichi suzuki chanced to hear a phonograph recording of franz schuberts ave maria as played on violin by mischa elman gripped by the beauty of the music he immediately picked up a violin from his fathers factory and began to teach himself to play the instrument by ear his father felt that instrumental performance was beneath his sons social status and refused to allow him to study the instrument at age 17 he began to teach himself by ear since no formal training was allowed to him eventually he convinced his father to allow him to study with a violin teacher in tokyo suzuki nurtured by love at age 22 suzuki travelled to germany to find a violin teacher to continue his studies while there he studied privately with karl klingler but did not receive any formal degree past his high school diploma he met and became friends with albert einstein who encouraged him in learning classical music he also met court'</li><li>'##act the technical course practically schoolbased enterprise a schoolbased enterprise is a simulated or actual business run by the school it offers students a learning experience by letting them manage the various aspects of a business service learningthis strategy combines community service with career where students provide volunteer service to public and nonprofit agencies civic and government offices etc student the student is central to the wbl process the student engages in a wbl program and completes all requirements of the program maintains high degree of professionalism and acquires necessary competencies for which the wbl program was designed business mentor a business mentor sets realistic goals for the student to acquire engages and supervises them to complete their tasks and is a role model for the student to emulate teacher coordinator a teacher coordinator is a certified educator who manages the wbl program and checks on the student progress and supports whenever required to ensure successful completion of the wbl program school administrator the school administrator is key in introducing wbl programs within the curriculum after identifying the appropriate courses that can be learnt through the program 
parents parental support enables successful completion of the wbl program as offer suitable guidance support and motivation to their wards and approve the wbl program that would be most suitable for meeting their wards learning needs and career aspirations application of classroom learning in realworld setting establishment of connection between school and work improvement in critical thinking analytical reasoning and logical abilities expansion of curriculum and learning facilities meeting the diverse needs of the learner creating a talented and skilled pool of future employees reduces preservice training time and cost improvement of student awareness of career opportunities making education relevant and valuable to the social context community building exercise for productive economy timeconsuming activity to identify key courses that can be taught via wbl programs needs careful consideration and planning when introducing wbl strategies within the existing curriculum certain wbl programs may not be in sync with the formal education timelines and pattern it is unclear what key elements of this learning may be and that readily available indicators which equate with academic learning outcomes are not necessarily evoking it accuracy needs effective coordination between all key persons involved in the wbl program effective evaluation strategy needs to be developed for assessing student performance this should encompass both formative and summative feedback this article incorporates text from a free content work licensed under ccbysa igo 30 license statementpermission text taken from levelsetting and recognition of learning outcomes the use of level descriptors in the twentyfirst century 115 keevey james chakroun borhene unesco unesco workintegrated learning'</li></ul> |
## Evaluation
### Metrics
| Label | F1 |
|:--------|:-------|
| **all** | 0.7541 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("udrearobert999/multi-qa-mpnet-base-cos-v1-contrastive-3e-250samples-20iter")
# Run inference
preds = model("##rch procedure that evaluates the objective function p x displaystyle pmathbf x on a grid of candidate source locations g displaystyle mathcal g to estimate the spatial location of the sound source x s displaystyle textbf xs as the point of the grid that provides the maximum srp modifications of the classical srpphat algorithm have been proposed to reduce the computational cost of the gridsearch step of the algorithm and to increase the robustness of the method in the classical srpphat for each microphone pair and for each point of the grid a unique integer tdoa value is selected to be the acoustic delay corresponding to that grid point this procedure does not guarantee that all tdoas are associated to points on the grid nor that the spatial grid is consistent since some of the points may not correspond to an intersection of hyperboloids this issue becomes more problematic with coarse grids since when the number of points is reduced part of the tdoa information gets lost because most delays are not anymore associated to any point in the grid the modified srpphat collects and uses the tdoa information related to the volume surrounding each spatial point of the search grid by considering a modified objective function where l m 1 m 2 l x displaystyle lm1m2lmathbf x and l m 1 m 2 u x displaystyle lm1m2umathbf x are the lower and upper accumulation limits of gcc delays which depend on the spatial location x displaystyle mathbf x the accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid alternatively they can be selected by considering the spatial gradient of the tdoa ∇ τ m 1 m 2 x ∇ x τ m 1 m 2 x ∇ y τ m 1 m 2 x ∇ z τ m 1 m 2 x t displaystyle nabla tau m1m2mathbf x nabla xtau m1m2mathbf x nabla ytau m1m2mathbf x nabla ztau m1m2mathbf x t where each component γ ∈ x y z displaystyle gamma in leftxyzright of the gradient is for a rectangular grid where neighboring points are separated a distance r displaystyle r the lower and upper accumulation limits are given by where d r 2 min 1 sin θ cos [UNK] 1 sin θ sin [UNK] 1 cos θ displaystyle dr2min leftfrac 1vert sintheta cosphi vert frac 1vert sintheta sinphi vert frac 1vert")
```
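If you need class probabilities rather than a hard label, SetFit also exposes `predict_proba`. A minimal sketch, reusing the `model` loaded above; the input string is a hypothetical placeholder, and the 43-class label space is the one listed under Training Set Metrics below:

```python
# Class probabilities instead of a hard label (43 classes, labels 0-42)
probs = model.predict_proba(["replace with your own input text"])
print(probs.argmax(-1))  # index of the most likely label
```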
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:---------|:----|
| Word count | 1 | 369.7392 | 509 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 250 |
| 1 | 250 |
| 2 | 250 |
| 3 | 250 |
| 4 | 250 |
| 5 | 250 |
| 6 | 250 |
| 7 | 250 |
| 8 | 250 |
| 9 | 250 |
| 10 | 250 |
| 11 | 250 |
| 12 | 250 |
| 13 | 250 |
| 14 | 250 |
| 15 | 250 |
| 16 | 250 |
| 17 | 250 |
| 18 | 250 |
| 19 | 250 |
| 20 | 250 |
| 21 | 250 |
| 22 | 250 |
| 23 | 250 |
| 24 | 250 |
| 25 | 250 |
| 26 | 250 |
| 27 | 250 |
| 28 | 250 |
| 29 | 250 |
| 30 | 250 |
| 31 | 250 |
| 32 | 250 |
| 33 | 250 |
| 34 | 250 |
| 35 | 250 |
| 36 | 250 |
| 37 | 250 |
| 38 | 250 |
| 39 | 250 |
| 40 | 250 |
| 41 | 250 |
| 42 | 250 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (3, 8)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 0.01)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- max_length: 512
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
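For reference, a minimal sketch of a training run with the hyperparameters above, using the SetFit `Trainer` API. The base checkpoint `sentence-transformers/multi-qa-mpnet-base-cos-v1` is inferred from this model's name, and `train_ds`/`eval_ds` are hypothetical 🤗 Datasets with `text` and `label` columns:

```python
from setfit import SetFitModel, Trainer, TrainingArguments
from sentence_transformers.losses import CosineSimilarityLoss

# Base checkpoint inferred from the model name above
model = SetFitModel.from_pretrained("sentence-transformers/multi-qa-mpnet-base-cos-v1")

args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(3, 8),
    sampling_strategy="oversampling",
    num_iterations=20,
    body_learning_rate=(2e-05, 0.01),
    head_learning_rate=0.01,
    loss=CosineSimilarityLoss,
    warmup_proportion=0.1,
    max_length=512,
    seed=42,
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # hypothetical Dataset with "text" and "label" columns
    eval_dataset=eval_ds,    # hypothetical held-out split
)
trainer.train()
```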
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:--------:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.2586 | - |
| 0.0930 | 2500 | 0.0925 | - |
| 0.1860 | 5000 | 0.0273 | - |
| **0.2791** | **7500** | **0.1452** | **0.0893** |
| 0.3721 | 10000 | 0.0029 | - |
| 0.4651 | 12500 | 0.0029 | - |
| 0.5581 | 15000 | 0.0702 | 0.1060 |
| 0.6512 | 17500 | 0.0178 | - |
| 0.7442 | 20000 | 0.0047 | - |
| 0.8372 | 22500 | 0.0006 | 0.1142 |
| 0.9302 | 25000 | 0.0191 | - |
| 1.0233 | 27500 | 0.0018 | - |
| 1.1163 | 30000 | 0.0061 | 0.1482 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1
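To reproduce this environment, the versions above can be pinned at install time (a convenience sketch of the listed versions):

```bash
pip install torch==2.2.1 setfit==1.0.3 sentence-transformers==2.7.0 transformers==4.40.1 datasets==2.19.1 tokenizers==0.19.1
```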
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"TRANSLATION"
] | [
"PCR"
] |
juanpablomesa/bge-base-financial-matryoshka | juanpablomesa | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9600",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-07-02T17:10:34 | 2024-07-02T17:10:50 | 47 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9600
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: The median home value in San Carlos, CA is $2,350,000.
sentences:
- What does the console property of the WorkerGlobalScope interface provide access
to?
- What is the last sold price and date for the property at 4372 W 14th Street Dr,
Greeley, CO 80634?
- What is the median home value in San Carlos, CA?
- source_sentence: The four new principals hired by Superintendent of Schools Ken
Kenworthy for the Okeechobee school system are Joseph Stanley at Central Elementary,
Jody Hays at Yearling Middle School, Tuuli Robinson at North Elementary, and Dr.
Thelma Jackson at Seminole Elementary School.
sentences:
- Who won the gold medal in the men's 1,500m final at the speed skating World Cup?
- What is the purpose of the 1,2,3 bowling activity for toddlers?
- Who are the four new principals hired by Superintendent of Schools Ken Kenworthy
for the Okeechobee school system?
- source_sentence: Twitter Audit is used to scan your followers and find out what
percentage of them are real people.
sentences:
- What is the main product discussed in the context of fair trade?
- What is the software mentioned in the context suitable for?
- What is the purpose of the Twitter Audit tool?
- source_sentence: Michael Czysz made the 2011 E1pc lighter and more powerful than
the 2010 version, and also improved the software controlling the bike’s D1g1tal
powertrain.
sentences:
- What changes did Michael Czysz make to the 2011 E1pc compared to the 2010 version?
- What is the author's suggestion for leaving a legacy for future generations?
- What is the most affordable and reliable option to fix a MacBook according to
the technician?
- source_sentence: HTC called the Samsung Galaxy S4 “mainstream”.
sentences:
- What is the essential aspect of the vocation to marriage according to Benedict
XVI's message on the 40th Anniversary of Humanae Vitae?
- What did HTC announce about the Samsung Galaxy S4?
- What was Allan Cox's First Class Delivery launched on for his Level 1 certification
flight?
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.9675
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9791666666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9829166666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98875
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9675
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3263888888888889
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1965833333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09887499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9675
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9791666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9829166666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.98875
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9776735843960416
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9741727843915341
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.974471752833939
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.9641666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9775
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9816666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98875
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9641666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3258333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1963333333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09887499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9641666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9775
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9816666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.98875
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9758504869144781
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9717977843915344
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9720465527215371
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.9620833333333333
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9741666666666666
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9804166666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98625
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9620833333333333
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.32472222222222225
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1960833333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09862499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9620833333333333
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9741666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9804166666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.98625
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9737941784937224
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9698406084656085
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9702070899963996
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.9554166666666667
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.97
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9766666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.98375
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9554166666666667
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3233333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1953333333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09837499999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9554166666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.97
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9766666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.98375
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.969307497603498
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9647410714285715
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9652034022263717
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.9391666666666667
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9616666666666667
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9666666666666667
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9758333333333333
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9391666666666667
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3205555555555556
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1933333333333333
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09758333333333333
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9391666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9616666666666667
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9666666666666667
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9758333333333333
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9577277779716886
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9519417989417989
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9525399354798056
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
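The architecture above implies CLS-token pooling followed by L2 normalization. The same embeddings can be reproduced with plain 🤗 Transformers; a minimal sketch assuming only the standard `AutoModel`/`AutoTokenizer` API, with an example sentence taken from the widget above:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("juanpablomesa/bge-base-financial-matryoshka")
model = AutoModel.from_pretrained("juanpablomesa/bge-base-financial-matryoshka")

batch = tokenizer(
    ["What is the median home value in San Carlos, CA?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    output = model(**batch)

# CLS-token pooling (pooling_mode_cls_token=True), then L2 normalization
embeddings = F.normalize(output.last_hidden_state[:, 0], p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```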
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("juanpablomesa/bge-base-financial-matryoshka")
# Run inference
sentences = [
'HTC called the Samsung Galaxy S4 “mainstream”.',
'What did HTC announce about the Samsung Galaxy S4?',
"What is the essential aspect of the vocation to marriage according to Benedict XVI's message on the 40th Anniversary of Humanae Vitae?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
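Because the model is trained with a Matryoshka objective at dimensions 768/512/256/128/64 (see Training Details below), the embeddings can be truncated to one of those prefixes and re-normalized with little quality loss. A minimal sketch continuing from the `embeddings` computed above (recent sentence-transformers releases also accept a `truncate_dim` argument in the constructor):

```python
import numpy as np

# Keep the first 256 dimensions (one of the trained Matryoshka sizes) and re-normalize
dim = 256
truncated = embeddings[:, :dim]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
print(truncated.shape)
# [3, 256]
```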
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9675 |
| cosine_accuracy@3 | 0.9792 |
| cosine_accuracy@5 | 0.9829 |
| cosine_accuracy@10 | 0.9888 |
| cosine_precision@1 | 0.9675 |
| cosine_precision@3 | 0.3264 |
| cosine_precision@5 | 0.1966 |
| cosine_precision@10 | 0.0989 |
| cosine_recall@1 | 0.9675 |
| cosine_recall@3 | 0.9792 |
| cosine_recall@5 | 0.9829 |
| cosine_recall@10 | 0.9888 |
| cosine_ndcg@10 | 0.9777 |
| cosine_mrr@10 | 0.9742 |
| **cosine_map@100** | **0.9745** |
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.9642 |
| cosine_accuracy@3 | 0.9775 |
| cosine_accuracy@5 | 0.9817 |
| cosine_accuracy@10 | 0.9888 |
| cosine_precision@1 | 0.9642 |
| cosine_precision@3 | 0.3258 |
| cosine_precision@5 | 0.1963 |
| cosine_precision@10 | 0.0989 |
| cosine_recall@1 | 0.9642 |
| cosine_recall@3 | 0.9775 |
| cosine_recall@5 | 0.9817 |
| cosine_recall@10 | 0.9888 |
| cosine_ndcg@10 | 0.9759 |
| cosine_mrr@10 | 0.9718 |
| **cosine_map@100** | **0.9720** |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9621 |
| cosine_accuracy@3 | 0.9742 |
| cosine_accuracy@5 | 0.9804 |
| cosine_accuracy@10 | 0.9862 |
| cosine_precision@1 | 0.9621 |
| cosine_precision@3 | 0.3247 |
| cosine_precision@5 | 0.1961 |
| cosine_precision@10 | 0.0986 |
| cosine_recall@1 | 0.9621 |
| cosine_recall@3 | 0.9742 |
| cosine_recall@5 | 0.9804 |
| cosine_recall@10 | 0.9862 |
| cosine_ndcg@10 | 0.9738 |
| cosine_mrr@10 | 0.9698 |
| **cosine_map@100** | **0.9702** |
#### Information Retrieval
* Dataset: `dim_128`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9554 |
| cosine_accuracy@3   | 0.9700     |
| cosine_accuracy@5 | 0.9767 |
| cosine_accuracy@10 | 0.9838 |
| cosine_precision@1 | 0.9554 |
| cosine_precision@3 | 0.3233 |
| cosine_precision@5 | 0.1953 |
| cosine_precision@10 | 0.0984 |
| cosine_recall@1 | 0.9554 |
| cosine_recall@3     | 0.9700     |
| cosine_recall@5 | 0.9767 |
| cosine_recall@10 | 0.9838 |
| cosine_ndcg@10 | 0.9693 |
| cosine_mrr@10 | 0.9647 |
| **cosine_map@100** | **0.9652** |
#### Information Retrieval
* Dataset: `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9392 |
| cosine_accuracy@3 | 0.9617 |
| cosine_accuracy@5 | 0.9667 |
| cosine_accuracy@10 | 0.9758 |
| cosine_precision@1 | 0.9392 |
| cosine_precision@3 | 0.3206 |
| cosine_precision@5 | 0.1933 |
| cosine_precision@10 | 0.0976 |
| cosine_recall@1 | 0.9392 |
| cosine_recall@3 | 0.9617 |
| cosine_recall@5 | 0.9667 |
| cosine_recall@10 | 0.9758 |
| cosine_ndcg@10 | 0.9577 |
| cosine_mrr@10 | 0.9519 |
| **cosine_map@100** | **0.9525** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 9,600 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 50.19 tokens</li><li>max: 435 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 18.66 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
| positive | anchor |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------|
| <code>The Berry Export Summary 2028 is a dedicated export plan for the Australian strawberry, raspberry, and blackberry industries. It maps the sectors’ current position, where they want to be, high-opportunity markets, and next steps. The purpose of this plan is to grow their global presence over the next 10 years.</code> | <code>What is the Berry Export Summary 2028 and what is its purpose?</code> |
| <code>Benefits reported from having access to Self-supply water sources include convenience, less time spent for fetching water and access to more and better quality water. In some areas, Self-supply sources offer important added values such as water for productive use, income generation, family safety and improved food security.</code> | <code>What are some of the benefits reported from having access to Self-supply water sources?</code> |
| <code>The unique features of the Coolands for Twitter app include Real-Time updates without the need for a refresh button, Avatar Indicator which shows small avatars on the title bar for new messages, Direct Link for intuitive and convenient link opening, Smart Bookmark to easily return to previous reading position, and User Level Notification which allows customized notification settings for different users.</code> | <code>What are the unique features of the Coolands for Twitter app?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
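The JSON above corresponds to wrapping the base `MultipleNegativesRankingLoss` in a `MatryoshkaLoss`, so the ranking objective is applied at every listed dimension. A minimal sketch of that construction, with a hypothetical base model id:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("your-org/base-model")  # hypothetical base model id
inner_loss = MultipleNegativesRankingLoss(model)

# The unit weights above mean each dimension contributes equally,
# so only the dimension list needs to be passed explicitly.
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```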
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
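These non-default values map one-to-one onto `SentenceTransformerTrainingArguments` in Sentence Transformers 3.x. A minimal sketch (the output path is a placeholder, `save_strategy="epoch"` is implied by `load_best_model_at_end`, and `bf16`/`tf32` assume an Ampere-or-newer GPU):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/matryoshka-ft",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",  # load_best_model_at_end needs matching strategies
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```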
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
|:--------:|:------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|:----------------------:|
| 0.5333 | 10 | 0.6065 | - | - | - | - | - |
| 0.96 | 18 | - | 0.9583 | 0.9674 | 0.9695 | 0.9372 | 0.9708 |
| 1.0667 | 20 | 0.3313 | - | - | - | - | - |
| 1.6 | 30 | 0.144 | - | - | - | - | - |
| 1.9733 | 37 | - | 0.9630 | 0.9699 | 0.9716 | 0.9488 | 0.9745 |
| 2.1333 | 40 | 0.1317 | - | - | - | - | - |
| 2.6667 | 50 | 0.0749 | - | - | - | - | - |
| 2.9867 | 56 | - | 0.9650 | 0.9701 | 0.9721 | 0.9522 | 0.9747 |
| 3.2 | 60 | 0.088 | - | - | - | - | - |
| 3.7333 | 70 | 0.0598 | - | - | - | - | - |
| **3.84** | **72** | **-** | **0.9652** | **0.9702** | **0.972** | **0.9525** | **0.9745** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.5
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.31.0
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"MEDAL"
] |
rjnClarke/thenlper-gte-base-fine-tuned | rjnClarke | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:10359",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:thenlper/gte-base",
"base_model:finetune:thenlper/gte-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-06T13:27:19 | 2024-08-06T13:27:53 | 47 | 0 | ---
base_model: thenlper/gte-base
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@3
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@200
- cosine_map@100
- dot_accuracy@3
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@200
- dot_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:10359
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: Cleopatra reacts to the news of Antony's death with a mixture of
sadness and resignation, contemplating her own mortality and the fickle nature
of life.
sentences:
- "Immortal longings in me. Now no more The juice of Egypt's grape shall moist\
\ this lip. Yare, yare, good Iras; quick. Methinks I hear Antony call. I\
\ see him rouse himself To praise my noble act. I hear him mock The luck\
\ of Caesar, which the gods give men To excuse their after wrath. Husband,\
\ I come. Now to that name my courage prove my title! I am fire and air;\
\ my other elements I give to baser life. So, have you done? Come then,\
\ and take the last warmth of my lips. Farewell, kind Charmian. Iras, long\
\ farewell. [Kisses them. IRAS falls and dies] \
\ Have I the aspic in my lips? Dost fall? If thus thou and nature can so gently\
\ part, The stroke of death is as a lover's pinch, Which hurts and is desir'd.\
\ Dost thou lie still? If thou vanishest, thou tell'st the world It is\
\ not worth leave-taking. CHARMIAN. Dissolve, thick cloud, and rain, that I may\
\ say The gods themselves do weep. CLEOPATRA. This proves me base.\n \
\ If she first meet the curled Antony,\n"
- "BURGUNDY. Warlike and martial Talbot, Burgundy\n Enshrines thee in his heart,\
\ and there erects Thy noble deeds as valour's monuments. TALBOT. Thanks,\
\ gentle Duke. But where is Pucelle now? I think her old familiar is asleep.\
\ Now where's the Bastard's braves, and Charles his gleeks? What, all amort?\
\ Rouen hangs her head for grief That such a valiant company are fled. Now\
\ will we take some order in the town, Placing therein some expert officers;\
\ And then depart to Paris to the King, For there young Henry with his nobles\
\ lie. BURGUNDY. What Lord Talbot pleaseth Burgundy. TALBOT. But yet, before\
\ we go, let's not forget The noble Duke of Bedford, late deceas'd, But\
\ see his exequies fulfill'd in Rouen. A braver soldier never couched lance,\
\ A gentler heart did never sway in court; But kings and mightiest potentates\
\ must die, For that's the end of human misery. Exeunt\n"
- "Your suffering in this dearth, you may as well\n Strike at the heaven with\
\ your staves as lift them Against the Roman state; whose course will on \
\ The way it takes, cracking ten thousand curbs Of more strong link asunder\
\ than can ever Appear in your impediment. For the dearth, The gods, not\
\ the patricians, make it, and Your knees to them, not arms, must help. Alack,\
\ You are transported by calamity Thither where more attends you; and you\
\ slander The helms o' th' state, who care for you like fathers, When you\
\ curse them as enemies. FIRST CITIZEN. Care for us! True, indeed! They ne'er\
\ car'd for us yet. Suffer us to famish, and their storehouses cramm'd with\
\ grain; make edicts for usury, to support usurers; repeal daily any wholesome\
\ act established against the rich, and provide more piercing statutes daily\
\ to chain up and restrain the poor. If the wars eat us not up, they will;\
\ and there's all the love they bear us. MENENIUS. Either you must Confess\
\ yourselves wondrous malicious, Or be accus'd of folly. I shall tell you \
\ A pretty tale. It may be you have heard it; But, since it serves my purpose,\
\ I will venture To stale't a little more. FIRST CITIZEN. Well, I'll hear\
\ it, sir; yet you must not think to fob off our disgrace with a tale. But,\
\ an't please you, deliver. MENENIUS. There was a time when all the body's members\
\ Rebell'd against the belly; thus accus'd it: That only like a gulf it\
\ did remain I' th' midst o' th' body, idle and unactive, Still cupboarding\
\ the viand, never bearing Like labour with the rest; where th' other instruments\
\ Did see and hear, devise, instruct, walk, feel,\n And, mutually participate,\
\ did minister\n"
- source_sentence: How does the excerpt reflect themes of loyalty and sacrifice in
the play?
sentences:
- "me a thousand marks in links and torches, walking with thee in\n the night\
\ betwixt tavern and tavern; but the sack that thou hast drunk me would have\
\ bought me lights as good cheap at the dearest chandler's in Europe. I have\
\ maintained that salamander of yours with fire any time this two-and-thirty\
\ years. God reward me for it! Bard. 'Sblood, I would my face were in your\
\ belly! Fal. God-a-mercy! so should I be sure to be heart-burn'd.\n \
\ Enter Hostess. How now, Dame Partlet the hen? Have you enquir'd\
\ yet who pick'd\n my pocket? Host. Why, Sir John, what do you think, Sir\
\ John? Do you think I keep thieves in my house? I have search'd, I have enquired,\
\ so has my husband, man by man, boy by boy, servant by servant. The tithe\
\ of a hair was never lost in my house before. Fal. Ye lie, hostess. Bardolph\
\ was shav'd and lost many a hair, and I'll be sworn my pocket was pick'd.\
\ Go to, you are a woman, go! Host. Who, I? No; I defy thee! God's light, I was\
\ never call'd so in mine own house before! Fal. Go to, I know you well enough.\
\ Host. No, Sir John; you do not know me, Sir John. I know you, Sir John.\
\ You owe me money, Sir John, and now you pick a quarrel to beguile me of\
\ it. I bought you a dozen of shirts to your back. Fal. Dowlas, filthy dowlas!\
\ I have given them away to bakers' wives; they have made bolters of them.\
\ Host. Now, as I am a true woman, holland of eight shillings an ell. You\
\ owe money here besides, Sir John, for your diet and by-drinkings, and money\
\ lent you, four-and-twenty pound. Fal. He had his part of it; let him pay. \
\ Host. He? Alas, he is poor; he hath nothing. Fal. How? Poor? Look upon his\
\ face. What call you rich? Let them coin his nose, let them coin his cheeks.\
\ I'll not pay a denier.\n What, will you make a younker of me? Shall I not\
\ take mine ease\n"
- "EDWARD. I wonder how our princely father scap'd,\n Or whether he be scap'd\
\ away or no From Clifford's and Northumberland's pursuit. Had he been ta'en,\
\ we should have heard the news; Had he been slain, we should have heard the\
\ news; Or had he scap'd, methinks we should have heard The happy tidings\
\ of his good escape. How fares my brother? Why is he so sad? RICHARD. I cannot\
\ joy until I be resolv'd Where our right valiant father is become. I saw\
\ him in the battle range about, And watch'd him how he singled Clifford forth.\
\ Methought he bore him in the thickest troop As doth a lion in a herd of\
\ neat;\n Or as a bear, encompass'd round with dogs,\n Who having pinch'd\
\ a few and made them cry, The rest stand all aloof and bark at him. So\
\ far'd our father with his enemies; So fled his enemies my warlike father.\
\ Methinks 'tis prize enough to be his son. See how the morning opes her\
\ golden gates And takes her farewell of the glorious sun. How well resembles\
\ it the prime of youth, Trimm'd like a younker prancing to his love! EDWARD.\
\ Dazzle mine eyes, or do I see three suns? RICHARD. Three glorious suns, each\
\ one a perfect sun; Not separated with the racking clouds, But sever'd\
\ in a pale clear-shining sky. See, see! they join, embrace, and seem to kiss,\
\ As if they vow'd some league inviolable. Now are they but one lamp, one\
\ light, one sun. In this the heaven figures some event. EDWARD. 'Tis wondrous\
\ strange, the like yet never heard of. I think it cites us, brother, to the\
\ field, That we, the sons of brave Plantagenet, Each one already blazing\
\ by our meeds, Should notwithstanding join our lights together And overshine\
\ the earth, as this the world. Whate'er it bodes, henceforward will I bear\
\ Upon my target three fair shining suns. RICHARD. Nay, bear three daughters-\
\ by your leave I speak it, You love the breeder better than the male.\n"
- "Forget that rarest treasure of your cheek,\n Exposing it- but, O, the harder\
\ heart! Alack, no remedy!- to the greedy touch Of common-kissing Titan,\
\ and forget Your laboursome and dainty trims wherein You made great Juno\
\ angry. IMOGEN. Nay, be brief; I see into thy end, and am almost A man\
\ already. PISANIO. First, make yourself but like one. Fore-thinking this,\
\ I have already fit- 'Tis in my cloak-bag- doublet, hat, hose, all That\
\ answer to them. Would you, in their serving, And with what imitation you\
\ can borrow From youth of such a season, fore noble Lucius Present yourself,\
\ desire his service, tell him Wherein you're happy- which will make him know\
\ If that his head have ear in music; doubtless With joy he will embrace\
\ you; for he's honourable, And, doubling that, most holy. Your means abroad-\
\ You have me, rich; and I will never fail Beginning nor supplyment. IMOGEN.\
\ Thou art all the comfort The gods will diet me with. Prithee away! There's\
\ more to be consider'd; but we'll even All that good time will give us. This\
\ attempt I am soldier to, and will abide it with A prince's courage. Away,\
\ I prithee. PISANIO. Well, madam, we must take a short farewell, Lest, being\
\ miss'd, I be suspected of Your carriage from the court. My noble mistress,\
\ Here is a box; I had it from the Queen. What's in't is precious. If you\
\ are sick at sea Or stomach-qualm'd at land, a dram of this\n Will drive\
\ away distemper. To some shade,\n And fit you to your manhood. May the gods\
\ Direct you to the best! IMOGEN. Amen. I thank thee. Exeunt\
\ severally\n"
- source_sentence: The excerpt showcases the emotional turmoil and sense of honor
that drives Brutus to take his own life in the face of defeat.
sentences:
- "Thou know'st that we two went to school together;\n Even for that our love\
\ of old, I prithee, Hold thou my sword-hilts, whilst I run on it. VOLUMNIUS.\
\ That's not an office for a friend, my lord. \
\ Alarum still. CLITUS. Fly, fly, my lord, there is no tarrying\
\ here. BRUTUS. Farewell to you, and you, and you, Volumnius. Strato, thou\
\ hast been all this while asleep; Farewell to thee too, Strato. Countrymen,\
\ My heart doth joy that yet in all my life I found no man but he was true\
\ to me. I shall have glory by this losing day, More than Octavius and Mark\
\ Antony By this vile conquest shall attain unto. So, fare you well at once,\
\ for Brutus' tongue Hath almost ended his life's history. Night hangs upon\
\ mine eyes, my bones would rest That have but labor'd to attain this hour.\
\ Alarum. Cry within, \"Fly, fly, fly!\" CLITUS. Fly,\
\ my lord, fly. BRUTUS. Hence! I will follow. Exeunt Clitus,\
\ Dardanius, and Volumnius. I prithee, Strato, stay thou by thy lord. Thou\
\ art a fellow of a good respect; Thy life hath had some smatch of honor in\
\ it. Hold then my sword, and turn away thy face, While I do run upon it.\
\ Wilt thou, Strato? STRATO. Give me your hand first. Fare you well, my lord.\
\ BRUTUS. Farewell, good Strato. Runs on his sword. Caesar,\
\ now be still; I kill'd not thee with half so good a will. Dies.\n\
\ Alarum. Retreat. Enter Octavius, Antony, Messala,\n Lucilius,\
\ and the Army.\n OCTAVIUS. What man is that?\n"
- "Elsinore. A room in the Castle.\nEnter King, Queen, Polonius, Ophelia, Rosencrantz,\
\ Guildenstern, and Lords. King. And can you by no drift of circumstance\n \
\ Get from him why he puts on this confusion, Grating so harshly all his days\
\ of quiet With turbulent and dangerous lunacy? Ros. He does confess he feels\
\ himself distracted, But from what cause he will by no means speak. Guil.\
\ Nor do we find him forward to be sounded, But with a crafty madness keeps\
\ aloof When we would bring him on to some confession Of his true state.\
\ Queen. Did he receive you well? Ros. Most like a gentleman. Guil. But with\
\ much forcing of his disposition. Ros. Niggard of question, but of our demands\
\ Most free in his reply. Queen. Did you assay him To any pastime? Ros.\
\ Madam, it so fell out that certain players\n We o'erraught on the way.\
\ Of these we told him,\n"
- "VII.\nThe French camp near Agincourt\nEnter the CONSTABLE OF FRANCE, the LORD\
\ RAMBURES, the DUKE OF ORLEANS,\nthe DAUPHIN, with others\n CONSTABLE. Tut!\
\ I have the best armour of the world.\n Would it were day! ORLEANS. You have\
\ an excellent armour; but let my horse have his due. CONSTABLE. It is the\
\ best horse of Europe. ORLEANS. Will it never be morning? DAUPHIN. My Lord\
\ of Orleans and my Lord High Constable, you talk of horse and armour? ORLEANS.\
\ You are as well provided of both as any prince in the world. DAUPHIN. What\
\ a long night is this! I will not change my horse with any that treads but\
\ on four pasterns. Ca, ha! he bounds from the earth as if his entrails were\
\ hairs; le cheval volant, the Pegasus, chez les narines de feu! When I bestride\
\ him I soar, I am a hawk. He trots the air; the earth sings when he touches\
\ it; the basest horn of his hoof is more musical than the pipe of Hermes.\
\ ORLEANS. He's of the colour of the nutmeg. DAUPHIN. And of the heat of the\
\ ginger. It is a beast for Perseus: he is pure air and fire; and the dull\
\ elements of earth and water never appear in him, but only in patient stillness\
\ while his rider mounts him; he is indeed a horse, and all other jades you\
\ may call beasts. CONSTABLE. Indeed, my lord, it is a most absolute and excellent\
\ horse.\n DAUPHIN. It is the prince of palfreys; his neigh is like the\n"
- source_sentence: What themes are present in the excerpt from the play?
sentences:
- "Enter TRAVERS NORTHUMBERLAND. Here comes my servant Travers, whom I sent\n \
\ On Tuesday last to listen after news. LORD BARDOLPH. My lord, I over-rode\
\ him on the way; And he is furnish'd with no certainties More than he haply\
\ may retail from me. NORTHUMBERLAND. Now, Travers, what good tidings comes with\
\ you? TRAVERS. My lord, Sir John Umfrevile turn'd me back With joyful tidings;\
\ and, being better hors'd, Out-rode me. After him came spurring hard A\
\ gentleman, almost forspent with speed, That stopp'd by me to breathe his\
\ bloodied horse. He ask'd the way to Chester; and of him I did demand what\
\ news from Shrewsbury. He told me that rebellion had bad luck, And that\
\ young Harry Percy's spur was cold. With that he gave his able horse the\
\ head And, bending forward, struck his armed heels\n Against the panting\
\ sides of his poor jade\n Up to the rowel-head; and starting so, He seem'd\
\ in running to devour the way, Staying no longer question. NORTHUMBERLAND.\
\ Ha! Again: Said he young Harry Percy's spur was cold? Of Hotspur, Coldspur?\
\ that rebellion Had met ill luck? LORD BARDOLPH. My lord, I'll tell you what:\
\ If my young lord your son have not the day, Upon mine honour, for a silken\
\ point I'll give my barony. Never talk of it. NORTHUMBERLAND. Why should\
\ that gentleman that rode by Travers Give then such instances of loss? LORD\
\ BARDOLPH. Who- he? He was some hilding fellow that had stol'n The horse\
\ he rode on and, upon my life, Spoke at a venture. Look, here comes more news.\
\ \n Enter Morton NORTHUMBERLAND. Yea, this man's brow,\
\ like to a title-leaf,\n"
- "ANTONY. Yet they are not join'd. Where yond pine does stand\n I shall discover\
\ all. I'll bring thee word Straight how 'tis like to go. \
\ Exit SCARUS. Swallows have built In Cleopatra's sails their nests.\
\ The augurers Say they know not, they cannot tell; look grimly, And dare\
\ not speak their knowledge. Antony Is valiant and dejected; and by starts\
\ His fretted fortunes give him hope and fear Of what he has and has not.\
\ [Alarum afar off, as at a sea-fight]\n \
\ Re-enter ANTONY ANTONY. All is lost!\n This foul Egyptian hath\
\ betrayed me. My fleet hath yielded to the foe, and yonder They cast\
\ their caps up and carouse together Like friends long lost. Triple-turn'd\
\ whore! 'tis thou\n Hast sold me to this novice; and my heart\n Makes\
\ only wars on thee. Bid them all fly; For when I am reveng'd upon my charm,\
\ I have done all. Bid them all fly; begone. Exit SCARUS O sun, thy\
\ uprise shall I see no more! Fortune and Antony part here; even here Do\
\ we shake hands. All come to this? The hearts That spaniel'd me at heels,\
\ to whom I gave Their wishes, do discandy, melt their sweets On blossoming\
\ Caesar; and this pine is bark'd That overtopp'd them all. Betray'd I am.\
\ O this false soul of Egypt! this grave charm- Whose eye beck'd forth my\
\ wars and call'd them home, Whose bosom was my crownet, my chief end- Like\
\ a right gypsy hath at fast and loose Beguil'd me to the very heart of loss.\
\ What, Eros, Eros! Enter CLEOPATRA\n Ah, thou spell!\
\ Avaunt!\n"
- "TALBOT. Saint George and victory! Fight, soldiers, fight.\n The Regent hath\
\ with Talbot broke his word And left us to the rage of France his sword. \
\ Where is John Talbot? Pause and take thy breath; I gave thee life and rescu'd\
\ thee from death. JOHN. O, twice my father, twice am I thy son! The life\
\ thou gav'st me first was lost and done Till with thy warlike sword, despite\
\ of fate, To my determin'd time thou gav'st new date. TALBOT. When from the\
\ Dauphin's crest thy sword struck fire, It warm'd thy father's heart with\
\ proud desire Of bold-fac'd victory. Then leaden age, Quicken'd with youthful\
\ spleen and warlike rage, Beat down Alencon, Orleans, Burgundy, And from\
\ the pride of Gallia rescued thee. The ireful bastard Orleans, that drew blood\
\ From thee, my boy, and had the maidenhood Of thy first fight, I soon encountered\
\ And, interchanging blows, I quickly shed Some of his bastard blood; and\
\ in disgrace\n Bespoke him thus: 'Contaminated, base,\n"
- source_sentence: What is the significance of the tennis balls in the excerpt from
the play?
sentences:
- "My fault is past. But, O, what form of prayer\n Can serve my turn? 'Forgive\
\ me my foul murther'? That cannot be; since I am still possess'd Of those\
\ effects for which I did the murther- My crown, mine own ambition, and my\
\ queen. May one be pardon'd and retain th' offence? In the corrupted currents\
\ of this world Offence's gilded hand may shove by justice, And oft 'tis\
\ seen the wicked prize itself Buys out the law; but 'tis not so above. \
\ There is no shuffling; there the action lies In his true nature, and we ourselves\
\ compell'd, Even to the teeth and forehead of our faults, To give in evidence.\
\ What then? What rests? Try what repentance can. What can it not? Yet what\
\ can it when one cannot repent? O wretched state! O bosom black as death!\
\ O limed soul, that, struggling to be free, Art more engag'd! Help, angels!\
\ Make assay. Bow, stubborn knees; and heart with strings of steel, Be\
\ soft as sinews of the new-born babe! All may be well. \
\ He kneels.\n Enter Hamlet. Ham. Now might\
\ I do it pat, now he is praying;\n And now I'll do't. And so he goes to heaven,\
\ And so am I reveng'd. That would be scann'd. A villain kills my father;\
\ and for that, I, his sole son, do this same villain send To heaven. \
\ Why, this is hire and salary, not revenge! He took my father grossly, full\
\ of bread, With all his crimes broad blown, as flush as May; And how his\
\ audit stands, who knows save heaven?\n But in our circumstance and course\
\ of thought,\n"
- "YORK. From Ireland thus comes York to claim his right\n And pluck the crown\
\ from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright,\
\ To entertain great England's lawful king. Ah, sancta majestas! who would\
\ not buy thee dear? Let them obey that knows not how to rule; This hand\
\ was made to handle nought but gold. I cannot give due action to my words\
\ Except a sword or sceptre balance it.\n A sceptre shall it have, have\
\ I a soul\n On which I'll toss the flower-de-luce of France.\n \
\ Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb\
\ me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York,\
\ if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept\
\ thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger\
\ from Henry, our dread liege, To know the reason of these arms in peace; \
\ Or why thou, being a subject as I am, Against thy oath and true allegiance\
\ sworn, Should raise so great a power without his leave, Or dare to bring\
\ thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is\
\ so great. O, I could hew up rocks and fight with flint, I am so angry\
\ at these abject terms; And now, like Ajax Telamonius, On sheep or oxen\
\ could I spend my fury. I am far better born than is the King, More like\
\ a king, more kingly in my thoughts; But I must make fair weather yet awhile,\
\ Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon\
\ me That I have given no answer all this while; My mind was troubled with\
\ deep melancholy. The cause why I have brought this army hither Is to\
\ remove proud Somerset from the King, Seditious to his Grace and to the state.\
\ BUCKINGHAM. That is too much presumption on thy part; But if thy arms be\
\ to no other end, The King hath yielded unto thy demand:\n The Duke of\
\ Somerset is in the Tower.\n"
- "Says that you savour too much of your youth,\n And bids you be advis'd there's\
\ nought in France That can be with a nimble galliard won; You cannot revel\
\ into dukedoms there. He therefore sends you, meeter for your spirit, This\
\ tun of treasure; and, in lieu of this, Desires you let the dukedoms that\
\ you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What\
\ treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the\
\ Dauphin is so pleasant with us; His present and your pains we thank you for.\
\ When we have match'd our rackets to these balls, We will in France,\
\ by God's grace, play a set Shall strike his father's crown into the hazard.\
\ Tell him he hath made a match with such a wrangler That all the courts\
\ of France will be disturb'd With chaces. And we understand him well, How\
\ he comes o'er us with our wilder days, Not measuring what use we made of\
\ them. We never valu'd this poor seat of England; And therefore, living\
\ hence, did give ourself To barbarous licence; as 'tis ever common That\
\ men are merriest when they are from home. But tell the Dauphin I will keep\
\ my state, Be like a king, and show my sail of greatness, When I do rouse\
\ me in my throne of France; For that I have laid by my majesty And plodded\
\ like a man for working-days; But I will rise there with so full a glory \
\ That I will dazzle all the eyes of France, Yea, strike the Dauphin blind\
\ to look on us. And tell the pleasant Prince this mock of his Hath turn'd\
\ his balls to gun-stones, and his soul Shall stand sore charged for the wasteful\
\ vengeance\n That shall fly with them; for many a thousand widows\n"
model-index:
- name: RAG_general/rerank/models/thenlper-gte-base-ft
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: context dev
type: context-dev
metrics:
- type: cosine_accuracy@3
value: 0.5095569070373588
name: Cosine Accuracy@3
- type: cosine_precision@1
value: 0.394874022589053
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.16985230234578627
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.11059947871416159
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.060338835794960896
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.394874022589053
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5095569070373588
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.552997393570808
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.603388357949609
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4969009218325175
name: Cosine Ndcg@10
- type: cosine_mrr@200
value: 0.46919455106379765
name: Cosine Mrr@200
- type: cosine_map@100
value: 0.4689011726803316
name: Cosine Map@100
- type: dot_accuracy@3
value: 0.5095569070373588
name: Dot Accuracy@3
- type: dot_precision@1
value: 0.394874022589053
name: Dot Precision@1
- type: dot_precision@3
value: 0.16985230234578627
name: Dot Precision@3
- type: dot_precision@5
value: 0.11059947871416159
name: Dot Precision@5
- type: dot_precision@10
value: 0.060338835794960896
name: Dot Precision@10
- type: dot_recall@1
value: 0.394874022589053
name: Dot Recall@1
- type: dot_recall@3
value: 0.5095569070373588
name: Dot Recall@3
- type: dot_recall@5
value: 0.552997393570808
name: Dot Recall@5
- type: dot_recall@10
value: 0.603388357949609
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.4969009218325175
name: Dot Ndcg@10
- type: dot_mrr@200
value: 0.46919455106379765
name: Dot Mrr@200
- type: dot_map@100
value: 0.4689011726803316
name: Dot Map@100
---
# RAG_general/rerank/models/thenlper-gte-base-ft
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [thenlper/gte-base](https://huggingface.co/thenlper/gte-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [thenlper/gte-base](https://huggingface.co/thenlper/gte-base) <!-- at revision 5e95d41db6721e7cbd5006e99c7508f0083223d6 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
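The `Pooling` and `Normalize` modules mean that, outside of Sentence Transformers, embeddings are reproduced by mean-pooling the transformer's token states over the attention mask and then L2-normalizing. A minimal sketch of that equivalence with plain `transformers` (it should match `model.encode` up to numerical precision):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "rjnClarke/thenlper-gte-base-fine-tuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer(
    ["A line from the play."],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # (batch, seq_len, 768)

# Mean-pool over non-padding tokens, then L2-normalize, mirroring the
# Pooling and Normalize modules listed above.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```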
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("rjnClarke/thenlper-gte-base-fine-tuned")
# Run inference
sentences = [
'What is the significance of the tennis balls in the excerpt from the play?',
"Says that you savour too much of your youth,\n And bids you be advis'd there's nought in France That can be with a nimble galliard won; You cannot revel into dukedoms there. He therefore sends you, meeter for your spirit, This tun of treasure; and, in lieu of this, Desires you let the dukedoms that you claim Hear no more of you. This the Dauphin speaks. KING HENRY. What treasure, uncle? EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant with us; His present and your pains we thank you for. When we have match'd our rackets to these balls, We will in France, by God's grace, play a set Shall strike his father's crown into the hazard. Tell him he hath made a match with such a wrangler That all the courts of France will be disturb'd With chaces. And we understand him well, How he comes o'er us with our wilder days, Not measuring what use we made of them. We never valu'd this poor seat of England; And therefore, living hence, did give ourself To barbarous licence; as 'tis ever common That men are merriest when they are from home. But tell the Dauphin I will keep my state, Be like a king, and show my sail of greatness, When I do rouse me in my throne of France; For that I have laid by my majesty And plodded like a man for working-days; But I will rise there with so full a glory That I will dazzle all the eyes of France, Yea, strike the Dauphin blind to look on us. And tell the pleasant Prince this mock of his Hath turn'd his balls to gun-stones, and his soul Shall stand sore charged for the wasteful vengeance\n That shall fly with them; for many a thousand widows\n",
"YORK. From Ireland thus comes York to claim his right\n And pluck the crown from feeble Henry's head: Ring bells aloud, burn bonfires clear and bright, To entertain great England's lawful king. Ah, sancta majestas! who would not buy thee dear? Let them obey that knows not how to rule; This hand was made to handle nought but gold. I cannot give due action to my words Except a sword or sceptre balance it.\n A sceptre shall it have, have I a soul\n On which I'll toss the flower-de-luce of France.\n Enter BUCKINGHAM [Aside] Whom have we here? Buckingham, to disturb me?\n The King hath sent him, sure: I must dissemble. BUCKINGHAM. York, if thou meanest well I greet thee well. YORK. Humphrey of Buckingham, I accept thy greeting. Art thou a messenger, or come of pleasure? BUCKINGHAM. A messenger from Henry, our dread liege, To know the reason of these arms in peace; Or why thou, being a subject as I am, Against thy oath and true allegiance sworn, Should raise so great a power without his leave, Or dare to bring thy force so near the court. YORK. [Aside] Scarce can I speak, my choler is so great. O, I could hew up rocks and fight with flint, I am so angry at these abject terms; And now, like Ajax Telamonius, On sheep or oxen could I spend my fury. I am far better born than is the King, More like a king, more kingly in my thoughts; But I must make fair weather yet awhile, Till Henry be more weak and I more strong.- Buckingham, I prithee, pardon me That I have given no answer all this while; My mind was troubled with deep melancholy. The cause why I have brought this army hither Is to remove proud Somerset from the King, Seditious to his Grace and to the state. BUCKINGHAM. That is too much presumption on thy part; But if thy arms be to no other end, The King hath yielded unto thy demand:\n The Duke of Somerset is in the Tower.\n",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `context-dev`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@3 | 0.5096 |
| cosine_precision@1 | 0.3949 |
| cosine_precision@3 | 0.1699 |
| cosine_precision@5 | 0.1106 |
| cosine_precision@10 | 0.0603 |
| cosine_recall@1 | 0.3949 |
| cosine_recall@3 | 0.5096 |
| cosine_recall@5 | 0.553 |
| cosine_recall@10 | 0.6034 |
| cosine_ndcg@10 | 0.4969 |
| cosine_mrr@200 | 0.4692 |
| **cosine_map@100** | **0.4689** |
| dot_accuracy@3 | 0.5096 |
| dot_precision@1 | 0.3949 |
| dot_precision@3 | 0.1699 |
| dot_precision@5 | 0.1106 |
| dot_precision@10 | 0.0603 |
| dot_recall@1 | 0.3949 |
| dot_recall@3 | 0.5096 |
| dot_recall@5 | 0.553 |
| dot_recall@10 | 0.6034 |
| dot_ndcg@10 | 0.4969 |
| dot_mrr@200 | 0.4692 |
| dot_map@100 | 0.4689 |
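The dot-product rows are identical to the cosine rows because the model's final `Normalize()` module makes every embedding unit-length, so dot product and cosine similarity coincide. A minimal sketch of running the evaluator yourself, with small hypothetical `queries`/`corpus`/`relevant_docs` mappings standing in for the real dev split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("rjnClarke/thenlper-gte-base-fine-tuned")

# Hypothetical toy data: query/passage ids mapped to text, plus relevance labels.
queries = {"q1": "What is the significance of the tennis balls in the excerpt?"}
corpus = {
    "d1": "EXETER. Tennis-balls, my liege. KING HENRY. We are glad the Dauphin is so pleasant with us; ...",
    "d2": "YORK. From Ireland thus comes York to claim his right ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="context-dev")
results = evaluator(model)  # returns a dict including cosine_map@100 and related metrics
print(results)
```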
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 10,359 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 22.32 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 35 tokens</li><li>mean: 351.19 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Who is the general being described in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>What is the main conflict highlighted in the excerpt?</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
| <code>The excerpt showcases the tension between Antony's loyalty to Cleopatra and his obligations to Caesar, as well as Cleopatra's influence over him.</code> | <code>PHILO. Nay, but this dotage of our general's<br> O'erflows the measure. Those his goodly eyes, That o'er the files and musters of the war Have glow'd like plated Mars, now bend, now turn, The office and devotion of their view Upon a tawny front. His captain's heart, Which in the scuffles of great fights hath burst<br> The buckles on his breast, reneges all temper,<br> And is become the bellows and the fan To cool a gipsy's lust.<br> Flourish. Enter ANTONY, CLEOPATRA, her LADIES, the train,<br> with eunuchs fanning her<br> Look where they come!<br> Take but good note, and you shall see in him The triple pillar of the world transform'd Into a strumpet's fool. Behold and see. CLEOPATRA. If it be love indeed, tell me how much. ANTONY. There's beggary in the love that can be reckon'd. CLEOPATRA. I'll set a bourn how far to be belov'd. ANTONY. Then must thou needs find out new heaven, new earth.<br> Enter a MESSENGER MESSENGER. News, my good lord, from Rome.<br> ANTONY. Grates me the sum. CLEOPATRA. Nay, hear them, Antony. Fulvia perchance is angry; or who knows If the scarce-bearded Caesar have not sent His pow'rful mandate to you: 'Do this or this; Take in that kingdom and enfranchise that; Perform't, or else we damn thee.' ANTONY. How, my love? CLEOPATRA. Perchance? Nay, and most like, You must not stay here longer; your dismission Is come from Caesar; therefore hear it, Antony. Where's Fulvia's process? Caesar's I would say? Both? Call in the messengers. As I am Egypt's Queen, Thou blushest, Antony, and that blood of thine Is Caesar's homager. Else so thy cheek pays shame<br> When shrill-tongu'd Fulvia scolds. The messengers!<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
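`MultipleNegativesRankingLoss` trains with in-batch negatives: for each (anchor, positive) pair, every other positive in the batch serves as a negative, and similarities are multiplied by `scale` before the cross-entropy over candidates. A minimal sketch of instantiating the loss exactly as configured above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("thenlper/gte-base")

# scale=20.0 and cos_sim match the JSON above (they are also the defaults).
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)
```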
### Evaluation Dataset
#### Unnamed Dataset
* Size: 2,302 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 21.73 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 354.59 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>The excerpt highlights the tension between Antony's loyalty to Cleopatra and his standing in Rome, showcasing the intricate balance of power and love in the play.</code> | <code>When shrill-tongu'd Fulvia scolds. The messengers!<br> ANTONY. Let Rome in Tiber melt, and the wide arch Of the rang'd empire fall! Here is my space. Kingdoms are clay; our dungy earth alike Feeds beast as man. The nobleness of life Is to do thus [emhracing], when such a mutual pair And such a twain can do't, in which I bind, On pain of punishment, the world to weet We stand up peerless. CLEOPATRA. Excellent falsehood! Why did he marry Fulvia, and not love her? I'll seem the fool I am not. Antony Will be himself. ANTONY. But stirr'd by Cleopatra. Now for the love of Love and her soft hours, Let's not confound the time with conference harsh; There's not a minute of our lives should stretch Without some pleasure now. What sport to-night? CLEOPATRA. Hear the ambassadors. ANTONY. Fie, wrangling queen! Whom everything becomes- to chide, to laugh, To weep; whose every passion fully strives To make itself in thee fair and admir'd. No messenger but thine, and all alone To-night we'll wander through the streets and note The qualities of people. Come, my queen; Last night you did desire it. Speak not to us. Exeunt ANTONY and CLEOPATRA, with the train DEMETRIUS. Is Caesar with Antonius priz'd so slight? PHILO. Sir, sometimes when he is not Antony, He comes too short of that great property Which still should go with Antony. DEMETRIUS. I am full sorry That he approves the common liar, who Thus speaks of him at Rome; but I will hope<br> Of better deeds to-morrow. Rest you happy! Exeunt<br></code> |
| <code>What is the significance of the soothsayer in the context of the play?</code> | <code>CHARMIAN. Lord Alexas, sweet Alexas, most anything Alexas, almost<br> most absolute Alexas, where's the soothsayer that you prais'd so to th' Queen? O that I knew this husband, which you say must charge his horns with garlands! ALEXAS. Soothsayer! SOOTHSAYER. Your will? CHARMIAN. Is this the man? Is't you, sir, that know things? SOOTHSAYER. In nature's infinite book of secrecy A little I can read. ALEXAS. Show him your hand.<br> Enter ENOBARBUS ENOBARBUS. Bring in the banquet quickly; wine enough<br> Cleopatra's health to drink. CHARMIAN. Good, sir, give me good fortune. SOOTHSAYER. I make not, but foresee. CHARMIAN. Pray, then, foresee me one. SOOTHSAYER. You shall be yet far fairer than you are. CHARMIAN. He means in flesh. IRAS. No, you shall paint when you are old. CHARMIAN. Wrinkles forbid! ALEXAS. Vex not his prescience; be attentive. CHARMIAN. Hush!<br> SOOTHSAYER. You shall be more beloving than beloved.<br></code> |
| <code>What is the setting of the scene in which the excerpt takes place?</code> | <code>sweet Isis, I beseech thee! And let her die too, and give him a<br> worse! And let worse follow worse, till the worst of all follow him laughing to his grave, fiftyfold a cuckold! Good Isis, hear me this prayer, though thou deny me a matter of more weight; good Isis, I beseech thee! IRAS. Amen. Dear goddess, hear that prayer of the people! For, as it is a heartbreaking to see a handsome man loose-wiv'd, so it is a deadly sorrow to behold a foul knave uncuckolded. Therefore, dear Isis, keep decorum, and fortune him accordingly! CHARMIAN. Amen. ALEXAS. Lo now, if it lay in their hands to make me a cuckold, they would make themselves whores but they'ld do't!<br> Enter CLEOPATRA ENOBARBUS. Hush! Here comes Antony.<br> CHARMIAN. Not he; the Queen. CLEOPATRA. Saw you my lord? ENOBARBUS. No, lady. CLEOPATRA. Was he not here? CHARMIAN. No, madam. CLEOPATRA. He was dispos'd to mirth; but on the sudden A Roman thought hath struck him. Enobarbus! ENOBARBUS. Madam? CLEOPATRA. Seek him, and bring him hither. Where's Alexas? ALEXAS. Here, at your service. My lord approaches.<br> Enter ANTONY, with a MESSENGER and attendants CLEOPATRA. We will not look upon him. Go with us.<br> Exeunt CLEOPATRA, ENOBARBUS, and the rest MESSENGER. Fulvia thy wife first came into the field. ANTONY. Against my brother Lucius? MESSENGER. Ay.<br> But soon that war had end, and the time's state<br></code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 3e-05
- `num_train_epochs`: 7
- `warmup_steps`: 50
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
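Wired together, these hyperparameters drive a standard `SentenceTransformerTrainer` run. A minimal end-to-end sketch, assuming a CUDA GPU for `fp16` and tiny hypothetical datasets standing in for the real 10,359-pair train split (`save_strategy="epoch"` is implied by `load_best_model_at_end`):

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("thenlper/gte-base")
loss = MultipleNegativesRankingLoss(model)

# Hypothetical pair datasets with the same "anchor"/"positive" columns as above.
train_dataset = Dataset.from_dict({
    "anchor": ["Who is the general being described in the excerpt?"],
    "positive": ["PHILO. Nay, but this dotage of our general's ..."],
})
eval_dataset = Dataset.from_dict({
    "anchor": ["What is the setting of the scene?"],
    "positive": ["sweet Isis, I beseech thee! ..."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="output/gte-base-ft",  # placeholder path
    eval_strategy="epoch",
    save_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=7,
    warmup_steps=50,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()
```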
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 7
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 50
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
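A minimal sketch, assuming the Sentence Transformers 3.x training API listed under Framework Versions below, of expressing the non-default hyperparameters above; the output directory is a placeholder:
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # hypothetical output directory
    eval_strategy="epoch",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=3e-5,
    num_train_epochs=7,
    warmup_steps=50,
    fp16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate texts per batch
)
```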
### Training Logs
| Epoch | Step | Training Loss | loss | context-dev_cosine_map@100 |
|:-------:|:-------:|:-------------:|:----------:|:--------------------------:|
| 1.0 | 324 | - | 1.6708 | 0.4417 |
| 1.5432 | 500 | 1.9498 | - | - |
| 2.0 | 648 | - | 1.5636 | 0.4688 |
| **3.0** | **972** | **-** | **1.5743** | **0.4689** |
| 3.0864 | 1000 | 1.1069 | - | - |
| 4.0 | 1296 | - | 1.5924 | 0.4655 |
| 4.6296 | 1500 | 0.7121 | - | - |
| 5.0 | 1620 | - | 1.6213 | 0.4621 |
| 6.0 | 1944 | - | 1.6450 | 0.4603 |
| 6.1728 | 2000 | 0.5308 | - | - |
| 7.0 | 2268 | - | 1.6664 | 0.4689 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.43.4
- PyTorch: 2.3.1+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
consciousAI/cai-lunaris-text-embeddings | consciousAI | sentence-similarity | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"mteb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-06-22T18:08:54 | 2023-06-22T21:33:52 | 46 | 4 | ---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- mteb
model-index:
- name: cai-lunaris-text-embeddings
results:
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 17.07
- type: map_at_10
value: 29.372999999999998
- type: map_at_100
value: 30.79
- type: map_at_1000
value: 30.819999999999997
- type: map_at_3
value: 24.395
- type: map_at_5
value: 27.137
- type: mrr_at_1
value: 17.923000000000002
- type: mrr_at_10
value: 29.695
- type: mrr_at_100
value: 31.098
- type: mrr_at_1000
value: 31.128
- type: mrr_at_3
value: 24.704
- type: mrr_at_5
value: 27.449
- type: ndcg_at_1
value: 17.07
- type: ndcg_at_10
value: 37.269000000000005
- type: ndcg_at_100
value: 43.716
- type: ndcg_at_1000
value: 44.531
- type: ndcg_at_3
value: 26.839000000000002
- type: ndcg_at_5
value: 31.845000000000002
- type: precision_at_1
value: 17.07
- type: precision_at_10
value: 6.3020000000000005
- type: precision_at_100
value: 0.922
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 11.309
- type: precision_at_5
value: 9.246
- type: recall_at_1
value: 17.07
- type: recall_at_10
value: 63.016000000000005
- type: recall_at_100
value: 92.24799999999999
- type: recall_at_1000
value: 98.72
- type: recall_at_3
value: 33.926
- type: recall_at_5
value: 46.23
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 53.44266265900711
- type: mrr
value: 66.54695950402322
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 75.9652953730204
- type: cos_sim_spearman
value: 73.96554077670989
- type: euclidean_pearson
value: 75.68477255792381
- type: euclidean_spearman
value: 74.59447076995703
- type: manhattan_pearson
value: 75.94984623881341
- type: manhattan_spearman
value: 74.72218452337502
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 14.119000000000002
- type: map_at_10
value: 19.661
- type: map_at_100
value: 20.706
- type: map_at_1000
value: 20.848
- type: map_at_3
value: 17.759
- type: map_at_5
value: 18.645
- type: mrr_at_1
value: 17.166999999999998
- type: mrr_at_10
value: 23.313
- type: mrr_at_100
value: 24.263
- type: mrr_at_1000
value: 24.352999999999998
- type: mrr_at_3
value: 21.412
- type: mrr_at_5
value: 22.313
- type: ndcg_at_1
value: 17.166999999999998
- type: ndcg_at_10
value: 23.631
- type: ndcg_at_100
value: 28.427000000000003
- type: ndcg_at_1000
value: 31.862000000000002
- type: ndcg_at_3
value: 20.175
- type: ndcg_at_5
value: 21.397
- type: precision_at_1
value: 17.166999999999998
- type: precision_at_10
value: 4.549
- type: precision_at_100
value: 0.8370000000000001
- type: precision_at_1000
value: 0.136
- type: precision_at_3
value: 9.68
- type: precision_at_5
value: 6.981
- type: recall_at_1
value: 14.119000000000002
- type: recall_at_10
value: 32.147999999999996
- type: recall_at_100
value: 52.739999999999995
- type: recall_at_1000
value: 76.67
- type: recall_at_3
value: 22.019
- type: recall_at_5
value: 25.361
- type: map_at_1
value: 16.576
- type: map_at_10
value: 22.281000000000002
- type: map_at_100
value: 23.066
- type: map_at_1000
value: 23.166
- type: map_at_3
value: 20.385
- type: map_at_5
value: 21.557000000000002
- type: mrr_at_1
value: 20.892
- type: mrr_at_10
value: 26.605
- type: mrr_at_100
value: 27.229
- type: mrr_at_1000
value: 27.296
- type: mrr_at_3
value: 24.809
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 20.892
- type: ndcg_at_10
value: 26.092
- type: ndcg_at_100
value: 29.398999999999997
- type: ndcg_at_1000
value: 31.884
- type: ndcg_at_3
value: 23.032
- type: ndcg_at_5
value: 24.634
- type: precision_at_1
value: 20.892
- type: precision_at_10
value: 4.885
- type: precision_at_100
value: 0.818
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 10.977
- type: precision_at_5
value: 8.013
- type: recall_at_1
value: 16.576
- type: recall_at_10
value: 32.945
- type: recall_at_100
value: 47.337
- type: recall_at_1000
value: 64.592
- type: recall_at_3
value: 24.053
- type: recall_at_5
value: 28.465
- type: map_at_1
value: 20.604
- type: map_at_10
value: 28.754999999999995
- type: map_at_100
value: 29.767
- type: map_at_1000
value: 29.852
- type: map_at_3
value: 26.268
- type: map_at_5
value: 27.559
- type: mrr_at_1
value: 24.326
- type: mrr_at_10
value: 31.602000000000004
- type: mrr_at_100
value: 32.46
- type: mrr_at_1000
value: 32.521
- type: mrr_at_3
value: 29.415000000000003
- type: mrr_at_5
value: 30.581000000000003
- type: ndcg_at_1
value: 24.326
- type: ndcg_at_10
value: 33.335
- type: ndcg_at_100
value: 38.086
- type: ndcg_at_1000
value: 40.319
- type: ndcg_at_3
value: 28.796
- type: ndcg_at_5
value: 30.758999999999997
- type: precision_at_1
value: 24.326
- type: precision_at_10
value: 5.712
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 13.208
- type: precision_at_5
value: 9.329
- type: recall_at_1
value: 20.604
- type: recall_at_10
value: 44.505
- type: recall_at_100
value: 65.866
- type: recall_at_1000
value: 82.61800000000001
- type: recall_at_3
value: 31.794
- type: recall_at_5
value: 36.831
- type: map_at_1
value: 8.280999999999999
- type: map_at_10
value: 11.636000000000001
- type: map_at_100
value: 12.363
- type: map_at_1000
value: 12.469
- type: map_at_3
value: 10.415000000000001
- type: map_at_5
value: 11.144
- type: mrr_at_1
value: 9.266
- type: mrr_at_10
value: 12.838
- type: mrr_at_100
value: 13.608999999999998
- type: mrr_at_1000
value: 13.700999999999999
- type: mrr_at_3
value: 11.507000000000001
- type: mrr_at_5
value: 12.343
- type: ndcg_at_1
value: 9.266
- type: ndcg_at_10
value: 13.877
- type: ndcg_at_100
value: 18.119
- type: ndcg_at_1000
value: 21.247
- type: ndcg_at_3
value: 11.376999999999999
- type: ndcg_at_5
value: 12.675
- type: precision_at_1
value: 9.266
- type: precision_at_10
value: 2.226
- type: precision_at_100
value: 0.47200000000000003
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 4.859
- type: precision_at_5
value: 3.6380000000000003
- type: recall_at_1
value: 8.280999999999999
- type: recall_at_10
value: 19.872999999999998
- type: recall_at_100
value: 40.585
- type: recall_at_1000
value: 65.225
- type: recall_at_3
value: 13.014000000000001
- type: recall_at_5
value: 16.147
- type: map_at_1
value: 4.1209999999999996
- type: map_at_10
value: 7.272
- type: map_at_100
value: 8.079
- type: map_at_1000
value: 8.199
- type: map_at_3
value: 6.212
- type: map_at_5
value: 6.736000000000001
- type: mrr_at_1
value: 5.721
- type: mrr_at_10
value: 9.418
- type: mrr_at_100
value: 10.281
- type: mrr_at_1000
value: 10.385
- type: mrr_at_3
value: 8.126
- type: mrr_at_5
value: 8.779
- type: ndcg_at_1
value: 5.721
- type: ndcg_at_10
value: 9.673
- type: ndcg_at_100
value: 13.852999999999998
- type: ndcg_at_1000
value: 17.546999999999997
- type: ndcg_at_3
value: 7.509
- type: ndcg_at_5
value: 8.373
- type: precision_at_1
value: 5.721
- type: precision_at_10
value: 2.04
- type: precision_at_100
value: 0.48
- type: precision_at_1000
value: 0.093
- type: precision_at_3
value: 4.022
- type: precision_at_5
value: 3.06
- type: recall_at_1
value: 4.1209999999999996
- type: recall_at_10
value: 15.201
- type: recall_at_100
value: 33.922999999999995
- type: recall_at_1000
value: 61.529999999999994
- type: recall_at_3
value: 8.869
- type: recall_at_5
value: 11.257
- type: map_at_1
value: 14.09
- type: map_at_10
value: 19.573999999999998
- type: map_at_100
value: 20.580000000000002
- type: map_at_1000
value: 20.704
- type: map_at_3
value: 17.68
- type: map_at_5
value: 18.64
- type: mrr_at_1
value: 17.227999999999998
- type: mrr_at_10
value: 23.152
- type: mrr_at_100
value: 24.056
- type: mrr_at_1000
value: 24.141000000000002
- type: mrr_at_3
value: 21.142
- type: mrr_at_5
value: 22.201
- type: ndcg_at_1
value: 17.227999999999998
- type: ndcg_at_10
value: 23.39
- type: ndcg_at_100
value: 28.483999999999998
- type: ndcg_at_1000
value: 31.709
- type: ndcg_at_3
value: 19.883
- type: ndcg_at_5
value: 21.34
- type: precision_at_1
value: 17.227999999999998
- type: precision_at_10
value: 4.3790000000000004
- type: precision_at_100
value: 0.826
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 9.496
- type: precision_at_5
value: 6.872
- type: recall_at_1
value: 14.09
- type: recall_at_10
value: 31.580000000000002
- type: recall_at_100
value: 54.074
- type: recall_at_1000
value: 77.092
- type: recall_at_3
value: 21.601
- type: recall_at_5
value: 25.333
- type: map_at_1
value: 10.538
- type: map_at_10
value: 15.75
- type: map_at_100
value: 16.71
- type: map_at_1000
value: 16.838
- type: map_at_3
value: 13.488
- type: map_at_5
value: 14.712
- type: mrr_at_1
value: 13.813
- type: mrr_at_10
value: 19.08
- type: mrr_at_100
value: 19.946
- type: mrr_at_1000
value: 20.044
- type: mrr_at_3
value: 16.838
- type: mrr_at_5
value: 17.951
- type: ndcg_at_1
value: 13.813
- type: ndcg_at_10
value: 19.669
- type: ndcg_at_100
value: 24.488
- type: ndcg_at_1000
value: 27.87
- type: ndcg_at_3
value: 15.479000000000001
- type: ndcg_at_5
value: 17.229
- type: precision_at_1
value: 13.813
- type: precision_at_10
value: 3.916
- type: precision_at_100
value: 0.743
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 7.534000000000001
- type: precision_at_5
value: 5.822
- type: recall_at_1
value: 10.538
- type: recall_at_10
value: 28.693
- type: recall_at_100
value: 50.308
- type: recall_at_1000
value: 74.44
- type: recall_at_3
value: 16.866999999999997
- type: recall_at_5
value: 21.404999999999998
- type: map_at_1
value: 11.044583333333332
- type: map_at_10
value: 15.682833333333335
- type: map_at_100
value: 16.506500000000003
- type: map_at_1000
value: 16.623833333333334
- type: map_at_3
value: 14.130833333333333
- type: map_at_5
value: 14.963583333333332
- type: mrr_at_1
value: 13.482833333333332
- type: mrr_at_10
value: 18.328500000000002
- type: mrr_at_100
value: 19.095416666666665
- type: mrr_at_1000
value: 19.18241666666666
- type: mrr_at_3
value: 16.754749999999998
- type: mrr_at_5
value: 17.614749999999997
- type: ndcg_at_1
value: 13.482833333333332
- type: ndcg_at_10
value: 18.81491666666667
- type: ndcg_at_100
value: 22.946833333333334
- type: ndcg_at_1000
value: 26.061083333333336
- type: ndcg_at_3
value: 15.949333333333332
- type: ndcg_at_5
value: 17.218333333333334
- type: precision_at_1
value: 13.482833333333332
- type: precision_at_10
value: 3.456583333333333
- type: precision_at_100
value: 0.6599166666666666
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 7.498833333333332
- type: precision_at_5
value: 5.477166666666667
- type: recall_at_1
value: 11.044583333333332
- type: recall_at_10
value: 25.737750000000005
- type: recall_at_100
value: 44.617916666666666
- type: recall_at_1000
value: 67.56524999999999
- type: recall_at_3
value: 17.598249999999997
- type: recall_at_5
value: 20.9035
- type: map_at_1
value: 9.362
- type: map_at_10
value: 13.414000000000001
- type: map_at_100
value: 14.083000000000002
- type: map_at_1000
value: 14.168
- type: map_at_3
value: 12.098
- type: map_at_5
value: 12.803999999999998
- type: mrr_at_1
value: 11.043
- type: mrr_at_10
value: 15.158
- type: mrr_at_100
value: 15.845999999999998
- type: mrr_at_1000
value: 15.916
- type: mrr_at_3
value: 13.88
- type: mrr_at_5
value: 14.601
- type: ndcg_at_1
value: 11.043
- type: ndcg_at_10
value: 16.034000000000002
- type: ndcg_at_100
value: 19.686
- type: ndcg_at_1000
value: 22.188
- type: ndcg_at_3
value: 13.530000000000001
- type: ndcg_at_5
value: 14.704
- type: precision_at_1
value: 11.043
- type: precision_at_10
value: 2.791
- type: precision_at_100
value: 0.5
- type: precision_at_1000
value: 0.077
- type: precision_at_3
value: 6.237
- type: precision_at_5
value: 4.5089999999999995
- type: recall_at_1
value: 9.362
- type: recall_at_10
value: 22.396
- type: recall_at_100
value: 39.528999999999996
- type: recall_at_1000
value: 58.809
- type: recall_at_3
value: 15.553
- type: recall_at_5
value: 18.512
- type: map_at_1
value: 5.657
- type: map_at_10
value: 8.273
- type: map_at_100
value: 8.875
- type: map_at_1000
value: 8.977
- type: map_at_3
value: 7.32
- type: map_at_5
value: 7.792000000000001
- type: mrr_at_1
value: 7.02
- type: mrr_at_10
value: 9.966999999999999
- type: mrr_at_100
value: 10.636
- type: mrr_at_1000
value: 10.724
- type: mrr_at_3
value: 8.872
- type: mrr_at_5
value: 9.461
- type: ndcg_at_1
value: 7.02
- type: ndcg_at_10
value: 10.199
- type: ndcg_at_100
value: 13.642000000000001
- type: ndcg_at_1000
value: 16.643
- type: ndcg_at_3
value: 8.333
- type: ndcg_at_5
value: 9.103
- type: precision_at_1
value: 7.02
- type: precision_at_10
value: 1.8929999999999998
- type: precision_at_100
value: 0.43
- type: precision_at_1000
value: 0.08099999999999999
- type: precision_at_3
value: 3.843
- type: precision_at_5
value: 2.884
- type: recall_at_1
value: 5.657
- type: recall_at_10
value: 14.563
- type: recall_at_100
value: 30.807000000000002
- type: recall_at_1000
value: 53.251000000000005
- type: recall_at_3
value: 9.272
- type: recall_at_5
value: 11.202
- type: map_at_1
value: 10.671999999999999
- type: map_at_10
value: 14.651
- type: map_at_100
value: 15.406
- type: map_at_1000
value: 15.525
- type: map_at_3
value: 13.461
- type: map_at_5
value: 14.163
- type: mrr_at_1
value: 12.407
- type: mrr_at_10
value: 16.782
- type: mrr_at_100
value: 17.562
- type: mrr_at_1000
value: 17.653
- type: mrr_at_3
value: 15.47
- type: mrr_at_5
value: 16.262
- type: ndcg_at_1
value: 12.407
- type: ndcg_at_10
value: 17.251
- type: ndcg_at_100
value: 21.378
- type: ndcg_at_1000
value: 24.689
- type: ndcg_at_3
value: 14.915000000000001
- type: ndcg_at_5
value: 16.1
- type: precision_at_1
value: 12.407
- type: precision_at_10
value: 2.91
- type: precision_at_100
value: 0.573
- type: precision_at_1000
value: 0.096
- type: precision_at_3
value: 6.779
- type: precision_at_5
value: 4.888
- type: recall_at_1
value: 10.671999999999999
- type: recall_at_10
value: 23.099
- type: recall_at_100
value: 41.937999999999995
- type: recall_at_1000
value: 66.495
- type: recall_at_3
value: 16.901
- type: recall_at_5
value: 19.807
- type: map_at_1
value: 13.364
- type: map_at_10
value: 17.772
- type: map_at_100
value: 18.659
- type: map_at_1000
value: 18.861
- type: map_at_3
value: 16.659
- type: map_at_5
value: 17.174
- type: mrr_at_1
value: 16.996
- type: mrr_at_10
value: 21.687
- type: mrr_at_100
value: 22.313
- type: mrr_at_1000
value: 22.422
- type: mrr_at_3
value: 20.652
- type: mrr_at_5
value: 21.146
- type: ndcg_at_1
value: 16.996
- type: ndcg_at_10
value: 21.067
- type: ndcg_at_100
value: 24.829
- type: ndcg_at_1000
value: 28.866999999999997
- type: ndcg_at_3
value: 19.466
- type: ndcg_at_5
value: 19.993
- type: precision_at_1
value: 16.996
- type: precision_at_10
value: 4.071000000000001
- type: precision_at_100
value: 0.9329999999999999
- type: precision_at_1000
value: 0.183
- type: precision_at_3
value: 9.223
- type: precision_at_5
value: 6.4030000000000005
- type: recall_at_1
value: 13.364
- type: recall_at_10
value: 25.976
- type: recall_at_100
value: 44.134
- type: recall_at_1000
value: 73.181
- type: recall_at_3
value: 20.503
- type: recall_at_5
value: 22.409000000000002
- type: map_at_1
value: 5.151
- type: map_at_10
value: 9.155000000000001
- type: map_at_100
value: 9.783999999999999
- type: map_at_1000
value: 9.879
- type: map_at_3
value: 7.825
- type: map_at_5
value: 8.637
- type: mrr_at_1
value: 5.915
- type: mrr_at_10
value: 10.34
- type: mrr_at_100
value: 10.943999999999999
- type: mrr_at_1000
value: 11.033
- type: mrr_at_3
value: 8.934000000000001
- type: mrr_at_5
value: 9.812
- type: ndcg_at_1
value: 5.915
- type: ndcg_at_10
value: 11.561
- type: ndcg_at_100
value: 14.971
- type: ndcg_at_1000
value: 17.907999999999998
- type: ndcg_at_3
value: 8.896999999999998
- type: ndcg_at_5
value: 10.313
- type: precision_at_1
value: 5.915
- type: precision_at_10
value: 2.1069999999999998
- type: precision_at_100
value: 0.414
- type: precision_at_1000
value: 0.074
- type: precision_at_3
value: 4.128
- type: precision_at_5
value: 3.327
- type: recall_at_1
value: 5.151
- type: recall_at_10
value: 17.874000000000002
- type: recall_at_100
value: 34.174
- type: recall_at_1000
value: 56.879999999999995
- type: recall_at_3
value: 10.732999999999999
- type: recall_at_5
value: 14.113000000000001
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.101
- type: map_at_10
value: 5.434
- type: map_at_100
value: 6.267
- type: map_at_1000
value: 6.418
- type: map_at_3
value: 4.377000000000001
- type: map_at_5
value: 4.841
- type: mrr_at_1
value: 7.166
- type: mrr_at_10
value: 12.012
- type: mrr_at_100
value: 13.144
- type: mrr_at_1000
value: 13.229
- type: mrr_at_3
value: 9.826
- type: mrr_at_5
value: 10.921
- type: ndcg_at_1
value: 7.166
- type: ndcg_at_10
value: 8.687000000000001
- type: ndcg_at_100
value: 13.345
- type: ndcg_at_1000
value: 16.915
- type: ndcg_at_3
value: 6.276
- type: ndcg_at_5
value: 7.013
- type: precision_at_1
value: 7.166
- type: precision_at_10
value: 2.9250000000000003
- type: precision_at_100
value: 0.771
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 4.734
- type: precision_at_5
value: 3.8830000000000005
- type: recall_at_1
value: 3.101
- type: recall_at_10
value: 11.774999999999999
- type: recall_at_100
value: 28.819
- type: recall_at_1000
value: 49.886
- type: recall_at_3
value: 5.783
- type: recall_at_5
value: 7.692
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.758
- type: map_at_10
value: 5.507
- type: map_at_100
value: 7.1819999999999995
- type: map_at_1000
value: 7.652
- type: map_at_3
value: 4.131
- type: map_at_5
value: 4.702
- type: mrr_at_1
value: 28.499999999999996
- type: mrr_at_10
value: 37.693
- type: mrr_at_100
value: 38.657000000000004
- type: mrr_at_1000
value: 38.704
- type: mrr_at_3
value: 34.792
- type: mrr_at_5
value: 36.417
- type: ndcg_at_1
value: 20.625
- type: ndcg_at_10
value: 14.771999999999998
- type: ndcg_at_100
value: 16.821
- type: ndcg_at_1000
value: 21.546000000000003
- type: ndcg_at_3
value: 16.528000000000002
- type: ndcg_at_5
value: 15.573
- type: precision_at_1
value: 28.499999999999996
- type: precision_at_10
value: 12.25
- type: precision_at_100
value: 3.7600000000000002
- type: precision_at_1000
value: 0.86
- type: precision_at_3
value: 19.167
- type: precision_at_5
value: 16.25
- type: recall_at_1
value: 2.758
- type: recall_at_10
value: 9.164
- type: recall_at_100
value: 21.022
- type: recall_at_1000
value: 37.053999999999995
- type: recall_at_3
value: 5.112
- type: recall_at_5
value: 6.413
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 28.53554681148413
- type: mrr
value: 29.290078704990325
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 76.52926207453477
- type: cos_sim_spearman
value: 68.98528351149498
- type: euclidean_pearson
value: 73.7744559091218
- type: euclidean_spearman
value: 69.03481995814735
- type: manhattan_pearson
value: 73.72818267270651
- type: manhattan_spearman
value: 69.00576442086793
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 61.71540153163407
- type: cos_sim_spearman
value: 58.502746406116614
- type: euclidean_pearson
value: 60.82817999438477
- type: euclidean_spearman
value: 58.988494433752756
- type: manhattan_pearson
value: 60.87147859170236
- type: manhattan_spearman
value: 59.03527382025516
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 72.89990498692094
- type: cos_sim_spearman
value: 74.03028513377879
- type: euclidean_pearson
value: 73.8252088833803
- type: euclidean_spearman
value: 74.15554246478399
- type: manhattan_pearson
value: 73.80947397334666
- type: manhattan_spearman
value: 74.13117958176566
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 70.67974206005906
- type: cos_sim_spearman
value: 66.18263558486296
- type: euclidean_pearson
value: 69.5048876024341
- type: euclidean_spearman
value: 66.36380457878391
- type: manhattan_pearson
value: 69.4895372451589
- type: manhattan_spearman
value: 66.36941569935124
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 73.99856913569187
- type: cos_sim_spearman
value: 75.54712054246464
- type: euclidean_pearson
value: 74.55692573876115
- type: euclidean_spearman
value: 75.34499056740096
- type: manhattan_pearson
value: 74.59342318869683
- type: manhattan_spearman
value: 75.35708317926819
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 72.3343670787494
- type: cos_sim_spearman
value: 73.7136650302399
- type: euclidean_pearson
value: 73.86004257913046
- type: euclidean_spearman
value: 73.9557418048638
- type: manhattan_pearson
value: 73.78919091538661
- type: manhattan_spearman
value: 73.86316425954108
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.08159601556619
- type: cos_sim_spearman
value: 80.13910828685532
- type: euclidean_pearson
value: 79.39197806617453
- type: euclidean_spearman
value: 79.85692277871196
- type: manhattan_pearson
value: 79.32452246324705
- type: manhattan_spearman
value: 79.70120373587193
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.29720207747786
- type: cos_sim_spearman
value: 65.65260681394685
- type: euclidean_pearson
value: 64.49002165983158
- type: euclidean_spearman
value: 65.25917651158736
- type: manhattan_pearson
value: 64.49981108236335
- type: manhattan_spearman
value: 65.20426825202405
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 71.1871068550574
- type: cos_sim_spearman
value: 71.40167034949341
- type: euclidean_pearson
value: 72.2373684855404
- type: euclidean_spearman
value: 71.90255429812984
- type: manhattan_pearson
value: 72.23173532049509
- type: manhattan_spearman
value: 71.87843489689064
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 68.65000574464773
- type: mrr
value: 88.29363084265044
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 40.76107749144358
- type: mrr
value: 41.03689202953908
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.68520527813894
- type: cos_sim_spearman
value: 29.017620841627433
- type: dot_pearson
value: 29.25380949876322
- type: dot_spearman
value: 29.33885250837327
---
# consciousAI/cai-lunaris-text-embeddings
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('consciousAI/cai-lunaris-text-embeddings')
embeddings = model.encode(sentences)
print(embeddings)
```
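As a quick follow-up, the library's cosine-similarity helper can score the two embeddings computed above (a sketch continuing from the snippet):
```python
from sentence_transformers import util

# Cosine similarity between the two example sentence embeddings.
print(util.cos_sim(embeddings[0], embeddings[1]))
```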
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('consciousAI/cai-lunaris-text-embeddings')
model = AutoModel.from_pretrained('consciousAI/cai-lunaris-text-embeddings')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
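As a follow-up, a cosine similarity between the two pooled embeddings above can be computed directly in PyTorch (a sketch; variable names continue from the snippet above):
```python
import torch.nn.functional as F

# Compare the two pooled sentence embeddings computed above.
similarity = F.cosine_similarity(sentence_embeddings[0], sentence_embeddings[1], dim=0)
print("cosine similarity:", similarity.item())
```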
| [
"SUMMARIZATION"
] | [
"BIOSSES"
] |
serdarcaglar/roberta-base-biomedical-es | serdarcaglar | fill-mask | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"es",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-09T13:27:39 | 2023-09-19T21:09:48 | 46 | 1 | ---
language:
- es
tags:
- biomedical
- spanish
metrics:
- ppl
---
# Biomedical language model for Spanish
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-uses-and-limitations)
- [How to use](#how-to-use)
- [Training](#training)
- [Tokenization and model pretraining](#tokenization-and-model-pretraining)
- [Training corpora and preprocessing](#training-corpora-and-preprocessing)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Licensing information](#licensing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
Biomedical pretrained language model for Spanish.
## Intended uses and limitations
The model is ready to use only for masked language modelling, i.e. the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on downstream tasks such as Named Entity Recognition or Text Classification, as in the sketch below.
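A minimal sketch of loading this checkpoint for fine-tuning on token classification; the label count is a hypothetical example that depends on your NER tag set:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("serdarcaglar/roberta-base-biomedical-es")
model = AutoModelForTokenClassification.from_pretrained(
    "serdarcaglar/roberta-base-biomedical-es",
    num_labels=5,  # hypothetical: depends on the NER tag set
)
```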
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("serdarcaglar/roberta-base-biomedical-es")
model = AutoModelForMaskedLM.from_pretrained("serdarcaglar/roberta-base-biomedical-es")
from transformers import pipeline
unmasker = pipeline('fill-mask', model="serdarcaglar/roberta-base-biomedical-es")
unmasker("El único antecedente personal a reseñar era la <mask> arterial.")
```
## Training
### Tokenization and model pretraining
This model is a [RoBERTa-based](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model trained on a
**biomedical** corpus in Spanish collected from several sources
- medprocner
- codiesp
- emea
- wmt19
- wmt16
- wmt22
- scielo
- ibecs
- elrc datasets
The training corpus has been tokenized using a byte-level version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2)
as used in the original [RoBERTa](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model, with a vocabulary size of 52,000 tokens. Pretraining consists of masked language model training at the subword level, following the approach employed for the RoBERTa base model with the same hyperparameters as in the original work.
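A minimal sketch, assuming the Hugging Face `tokenizers` library, of training a byte-level BPE tokenizer with a 52,000-token vocabulary as described above; the corpus path and special tokens are illustrative assumptions:
```python
from tokenizers import ByteLevelBPETokenizer

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["es_biomedical_corpus.txt"],  # hypothetical path to the cleaned corpus
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],
)
tokenizer.save_model("tokenizer-es-biomedical")  # writes vocab.json and merges.txt
```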
### Training corpora and preprocessing
The training corpus is composed of several biomedical corpora in Spanish, collected from publicly available corpora and crawlers.
To obtain a high-quality training corpus, a cleaning pipeline with the following operations has been applied:
- data parsing in different formats
- sentence splitting
- language detection
- filtering of ill-formed sentences
- deduplication of repetitive contents
- preservation of the original document boundaries
Finally, the corpora are concatenated and a further global deduplication across corpora is applied.
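A minimal sketch of the deduplication step above; the actual pipeline and tooling used to build the corpus are not specified in this card, so this is purely illustrative:
```python
def deduplicate(sentences):
    """Yield each sentence once, keyed on a normalized form."""
    seen = set()
    for sentence in sentences:
        key = " ".join(sentence.lower().split())  # normalize case and whitespace
        if key not in seen:
            seen.add(key)
            yield sentence
```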
## Evaluation
No downstream Named Entity Recognition (NER) results are available for this model yet.
Perplexity: 3.09
Please share the results you get on NER tasks using this model, and they can be added here.
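For reference, the pseudo-perplexity of a masked language model can be estimated as the exponential of its cross-entropy loss on masked tokens. The sketch below is illustrative (the sentence and masked position are arbitrary) and is not necessarily how the figure above was computed:
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("serdarcaglar/roberta-base-biomedical-es")
model = AutoModelForMaskedLM.from_pretrained("serdarcaglar/roberta-base-biomedical-es")

inputs = tokenizer("El paciente presenta hipertensión arterial.", return_tensors="pt")
masked_position = 3  # hypothetical choice of token to mask
labels = torch.full_like(inputs["input_ids"], -100)  # -100 = ignored by the loss
labels[0, masked_position] = inputs["input_ids"][0, masked_position]
inputs["input_ids"][0, masked_position] = tokenizer.mask_token_id

with torch.no_grad():
    loss = model(**inputs, labels=labels).loss
print("pseudo-perplexity:", torch.exp(loss).item())
```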
## Additional information
### Author
Serdar ÇAĞLAR
### Contact information
LinkedIn: <https://www.linkedin.com/in/serdarildercaglar/>
For further information, send an email to <[email protected]>
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and other undesirable distortions.
When third parties, deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner of the models be liable for any results arising from the use made by third parties of these models.
</details> | [
"NAMED_ENTITY_RECOGNITION",
"TEXT_CLASSIFICATION"
] | [
"CODIESP",
"SCIELO"
] |
woody72/multilingual-e5-base | woody72 | sentence-similarity | [
"sentence-transformers",
"pytorch",
"onnx",
"safetensors",
"xlm-roberta",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"arxiv:2212.03533",
"arxiv:2108.08787",
"arxiv:2104.08663",
"arxiv:2210.07316",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-05T15:18:20 | 2023-11-05T15:31:52 | 46 | 0 | ---
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
model-index:
- name: multilingual-e5-base
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 78.97014925373135
- type: ap
value: 43.69351129103008
- type: f1
value: 73.38075030070492
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.7237687366167
- type: ap
value: 82.22089859962671
- type: f1
value: 69.95532758884401
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.65517241379312
- type: ap
value: 28.507918657094738
- type: f1
value: 66.84516013726119
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.32976445396146
- type: ap
value: 20.720481637566014
- type: f1
value: 59.78002763416003
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 90.63775
- type: ap
value: 87.22277903861716
- type: f1
value: 90.60378636386807
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.546
- type: f1
value: 44.05666638370923
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.828
- type: f1
value: 41.2710255644252
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.534
- type: f1
value: 39.820743174270326
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 39.684
- type: f1
value: 39.11052682815307
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.436
- type: f1
value: 37.07082931930871
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 37.226000000000006
- type: f1
value: 36.65372077739185
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.831000000000003
- type: map_at_10
value: 36.42
- type: map_at_100
value: 37.699
- type: map_at_1000
value: 37.724000000000004
- type: map_at_3
value: 32.207
- type: map_at_5
value: 34.312
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 36.574
- type: mrr_at_100
value: 37.854
- type: mrr_at_1000
value: 37.878
- type: mrr_at_3
value: 32.385000000000005
- type: mrr_at_5
value: 34.48
- type: ndcg_at_1
value: 22.831000000000003
- type: ndcg_at_10
value: 44.230000000000004
- type: ndcg_at_100
value: 49.974000000000004
- type: ndcg_at_1000
value: 50.522999999999996
- type: ndcg_at_3
value: 35.363
- type: ndcg_at_5
value: 39.164
- type: precision_at_1
value: 22.831000000000003
- type: precision_at_10
value: 6.935
- type: precision_at_100
value: 0.9520000000000001
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 14.841
- type: precision_at_5
value: 10.754
- type: recall_at_1
value: 22.831000000000003
- type: recall_at_10
value: 69.346
- type: recall_at_100
value: 95.235
- type: recall_at_1000
value: 99.36
- type: recall_at_3
value: 44.523
- type: recall_at_5
value: 53.769999999999996
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 40.27789869854063
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 35.41979463347428
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 58.22752045109304
- type: mrr
value: 71.51112430198303
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.71147646622866
- type: cos_sim_spearman
value: 85.059167046486
- type: euclidean_pearson
value: 75.88421613600647
- type: euclidean_spearman
value: 75.12821787150585
- type: manhattan_pearson
value: 75.22005646957604
- type: manhattan_spearman
value: 74.42880434453272
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.23799582463465
- type: f1
value: 99.12665274878218
- type: precision
value: 99.07098121085595
- type: recall
value: 99.23799582463465
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.88685890380806
- type: f1
value: 97.59336708489249
- type: precision
value: 97.44662117543473
- type: recall
value: 97.88685890380806
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.47142362313821
- type: f1
value: 97.1989377670015
- type: precision
value: 97.06384944001847
- type: recall
value: 97.47142362313821
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.4728804634018
- type: f1
value: 98.2973494821836
- type: precision
value: 98.2095839915745
- type: recall
value: 98.4728804634018
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 82.74025974025975
- type: f1
value: 82.67420447730439
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.0380848063507
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 29.45956405670166
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.122
- type: map_at_10
value: 42.03
- type: map_at_100
value: 43.364000000000004
- type: map_at_1000
value: 43.474000000000004
- type: map_at_3
value: 38.804
- type: map_at_5
value: 40.585
- type: mrr_at_1
value: 39.914
- type: mrr_at_10
value: 48.227
- type: mrr_at_100
value: 49.018
- type: mrr_at_1000
value: 49.064
- type: mrr_at_3
value: 45.994
- type: mrr_at_5
value: 47.396
- type: ndcg_at_1
value: 39.914
- type: ndcg_at_10
value: 47.825
- type: ndcg_at_100
value: 52.852
- type: ndcg_at_1000
value: 54.891
- type: ndcg_at_3
value: 43.517
- type: ndcg_at_5
value: 45.493
- type: precision_at_1
value: 39.914
- type: precision_at_10
value: 8.956
- type: precision_at_100
value: 1.388
- type: precision_at_1000
value: 0.182
- type: precision_at_3
value: 20.791999999999998
- type: precision_at_5
value: 14.821000000000002
- type: recall_at_1
value: 32.122
- type: recall_at_10
value: 58.294999999999995
- type: recall_at_100
value: 79.726
- type: recall_at_1000
value: 93.099
- type: recall_at_3
value: 45.017
- type: recall_at_5
value: 51.002
- type: map_at_1
value: 29.677999999999997
- type: map_at_10
value: 38.684000000000005
- type: map_at_100
value: 39.812999999999995
- type: map_at_1000
value: 39.945
- type: map_at_3
value: 35.831
- type: map_at_5
value: 37.446
- type: mrr_at_1
value: 37.771
- type: mrr_at_10
value: 44.936
- type: mrr_at_100
value: 45.583
- type: mrr_at_1000
value: 45.634
- type: mrr_at_3
value: 42.771
- type: mrr_at_5
value: 43.994
- type: ndcg_at_1
value: 37.771
- type: ndcg_at_10
value: 44.059
- type: ndcg_at_100
value: 48.192
- type: ndcg_at_1000
value: 50.375
- type: ndcg_at_3
value: 40.172000000000004
- type: ndcg_at_5
value: 41.899
- type: precision_at_1
value: 37.771
- type: precision_at_10
value: 8.286999999999999
- type: precision_at_100
value: 1.322
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 19.406000000000002
- type: precision_at_5
value: 13.745
- type: recall_at_1
value: 29.677999999999997
- type: recall_at_10
value: 53.071
- type: recall_at_100
value: 70.812
- type: recall_at_1000
value: 84.841
- type: recall_at_3
value: 41.016000000000005
- type: recall_at_5
value: 46.22
- type: map_at_1
value: 42.675000000000004
- type: map_at_10
value: 53.93599999999999
- type: map_at_100
value: 54.806999999999995
- type: map_at_1000
value: 54.867
- type: map_at_3
value: 50.934000000000005
- type: map_at_5
value: 52.583
- type: mrr_at_1
value: 48.339
- type: mrr_at_10
value: 57.265
- type: mrr_at_100
value: 57.873
- type: mrr_at_1000
value: 57.906
- type: mrr_at_3
value: 55.193000000000005
- type: mrr_at_5
value: 56.303000000000004
- type: ndcg_at_1
value: 48.339
- type: ndcg_at_10
value: 59.19799999999999
- type: ndcg_at_100
value: 62.743
- type: ndcg_at_1000
value: 63.99399999999999
- type: ndcg_at_3
value: 54.367
- type: ndcg_at_5
value: 56.548
- type: precision_at_1
value: 48.339
- type: precision_at_10
value: 9.216000000000001
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.72
- type: precision_at_5
value: 16.025
- type: recall_at_1
value: 42.675000000000004
- type: recall_at_10
value: 71.437
- type: recall_at_100
value: 86.803
- type: recall_at_1000
value: 95.581
- type: recall_at_3
value: 58.434
- type: recall_at_5
value: 63.754
- type: map_at_1
value: 23.518
- type: map_at_10
value: 30.648999999999997
- type: map_at_100
value: 31.508999999999997
- type: map_at_1000
value: 31.604
- type: map_at_3
value: 28.247
- type: map_at_5
value: 29.65
- type: mrr_at_1
value: 25.650000000000002
- type: mrr_at_10
value: 32.771
- type: mrr_at_100
value: 33.554
- type: mrr_at_1000
value: 33.629999999999995
- type: mrr_at_3
value: 30.433
- type: mrr_at_5
value: 31.812
- type: ndcg_at_1
value: 25.650000000000002
- type: ndcg_at_10
value: 34.929
- type: ndcg_at_100
value: 39.382
- type: ndcg_at_1000
value: 41.913
- type: ndcg_at_3
value: 30.292
- type: ndcg_at_5
value: 32.629999999999995
- type: precision_at_1
value: 25.650000000000002
- type: precision_at_10
value: 5.311
- type: precision_at_100
value: 0.792
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 12.58
- type: precision_at_5
value: 8.994
- type: recall_at_1
value: 23.518
- type: recall_at_10
value: 46.19
- type: recall_at_100
value: 67.123
- type: recall_at_1000
value: 86.442
- type: recall_at_3
value: 33.678000000000004
- type: recall_at_5
value: 39.244
- type: map_at_1
value: 15.891
- type: map_at_10
value: 22.464000000000002
- type: map_at_100
value: 23.483
- type: map_at_1000
value: 23.613
- type: map_at_3
value: 20.080000000000002
- type: map_at_5
value: 21.526
- type: mrr_at_1
value: 20.025000000000002
- type: mrr_at_10
value: 26.712999999999997
- type: mrr_at_100
value: 27.650000000000002
- type: mrr_at_1000
value: 27.737000000000002
- type: mrr_at_3
value: 24.274
- type: mrr_at_5
value: 25.711000000000002
- type: ndcg_at_1
value: 20.025000000000002
- type: ndcg_at_10
value: 27.028999999999996
- type: ndcg_at_100
value: 32.064
- type: ndcg_at_1000
value: 35.188
- type: ndcg_at_3
value: 22.512999999999998
- type: ndcg_at_5
value: 24.89
- type: precision_at_1
value: 20.025000000000002
- type: precision_at_10
value: 4.776
- type: precision_at_100
value: 0.8500000000000001
- type: precision_at_1000
value: 0.125
- type: precision_at_3
value: 10.531
- type: precision_at_5
value: 7.811
- type: recall_at_1
value: 15.891
- type: recall_at_10
value: 37.261
- type: recall_at_100
value: 59.12
- type: recall_at_1000
value: 81.356
- type: recall_at_3
value: 24.741
- type: recall_at_5
value: 30.753999999999998
- type: map_at_1
value: 27.544
- type: map_at_10
value: 36.283
- type: map_at_100
value: 37.467
- type: map_at_1000
value: 37.574000000000005
- type: map_at_3
value: 33.528999999999996
- type: map_at_5
value: 35.028999999999996
- type: mrr_at_1
value: 34.166999999999994
- type: mrr_at_10
value: 41.866
- type: mrr_at_100
value: 42.666
- type: mrr_at_1000
value: 42.716
- type: mrr_at_3
value: 39.541
- type: mrr_at_5
value: 40.768
- type: ndcg_at_1
value: 34.166999999999994
- type: ndcg_at_10
value: 41.577
- type: ndcg_at_100
value: 46.687
- type: ndcg_at_1000
value: 48.967
- type: ndcg_at_3
value: 37.177
- type: ndcg_at_5
value: 39.097
- type: precision_at_1
value: 34.166999999999994
- type: precision_at_10
value: 7.420999999999999
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.154
- type: precision_at_3
value: 17.291999999999998
- type: precision_at_5
value: 12.166
- type: recall_at_1
value: 27.544
- type: recall_at_10
value: 51.99399999999999
- type: recall_at_100
value: 73.738
- type: recall_at_1000
value: 89.33
- type: recall_at_3
value: 39.179
- type: recall_at_5
value: 44.385999999999996
- type: map_at_1
value: 26.661
- type: map_at_10
value: 35.475
- type: map_at_100
value: 36.626999999999995
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 32.818000000000005
- type: map_at_5
value: 34.397
- type: mrr_at_1
value: 32.647999999999996
- type: mrr_at_10
value: 40.784
- type: mrr_at_100
value: 41.602
- type: mrr_at_1000
value: 41.661
- type: mrr_at_3
value: 38.68
- type: mrr_at_5
value: 39.838
- type: ndcg_at_1
value: 32.647999999999996
- type: ndcg_at_10
value: 40.697
- type: ndcg_at_100
value: 45.799
- type: ndcg_at_1000
value: 48.235
- type: ndcg_at_3
value: 36.516
- type: ndcg_at_5
value: 38.515
- type: precision_at_1
value: 32.647999999999996
- type: precision_at_10
value: 7.202999999999999
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.151
- type: precision_at_3
value: 17.314
- type: precision_at_5
value: 12.145999999999999
- type: recall_at_1
value: 26.661
- type: recall_at_10
value: 50.995000000000005
- type: recall_at_100
value: 73.065
- type: recall_at_1000
value: 89.781
- type: recall_at_3
value: 39.073
- type: recall_at_5
value: 44.395
- type: map_at_1
value: 25.946583333333333
- type: map_at_10
value: 33.79725
- type: map_at_100
value: 34.86408333333333
- type: map_at_1000
value: 34.9795
- type: map_at_3
value: 31.259999999999998
- type: map_at_5
value: 32.71541666666666
- type: mrr_at_1
value: 30.863749999999996
- type: mrr_at_10
value: 37.99183333333333
- type: mrr_at_100
value: 38.790499999999994
- type: mrr_at_1000
value: 38.85575000000001
- type: mrr_at_3
value: 35.82083333333333
- type: mrr_at_5
value: 37.07533333333333
- type: ndcg_at_1
value: 30.863749999999996
- type: ndcg_at_10
value: 38.52141666666667
- type: ndcg_at_100
value: 43.17966666666667
- type: ndcg_at_1000
value: 45.64608333333333
- type: ndcg_at_3
value: 34.333000000000006
- type: ndcg_at_5
value: 36.34975
- type: precision_at_1
value: 30.863749999999996
- type: precision_at_10
value: 6.598999999999999
- type: precision_at_100
value: 1.0502500000000001
- type: precision_at_1000
value: 0.14400000000000002
- type: precision_at_3
value: 15.557583333333334
- type: precision_at_5
value: 11.020000000000001
- type: recall_at_1
value: 25.946583333333333
- type: recall_at_10
value: 48.36991666666666
- type: recall_at_100
value: 69.02408333333334
- type: recall_at_1000
value: 86.43858333333331
- type: recall_at_3
value: 36.4965
- type: recall_at_5
value: 41.76258333333334
- type: map_at_1
value: 22.431
- type: map_at_10
value: 28.889
- type: map_at_100
value: 29.642000000000003
- type: map_at_1000
value: 29.742
- type: map_at_3
value: 26.998
- type: map_at_5
value: 28.172000000000004
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 31.763
- type: mrr_at_100
value: 32.443
- type: mrr_at_1000
value: 32.531
- type: mrr_at_3
value: 29.959000000000003
- type: mrr_at_5
value: 31.063000000000002
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 32.586999999999996
- type: ndcg_at_100
value: 36.5
- type: ndcg_at_1000
value: 39.133
- type: ndcg_at_3
value: 29.25
- type: ndcg_at_5
value: 31.023
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 4.954
- type: precision_at_100
value: 0.747
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 12.577
- type: precision_at_5
value: 8.741999999999999
- type: recall_at_1
value: 22.431
- type: recall_at_10
value: 41.134
- type: recall_at_100
value: 59.28600000000001
- type: recall_at_1000
value: 78.857
- type: recall_at_3
value: 31.926
- type: recall_at_5
value: 36.335
- type: map_at_1
value: 17.586
- type: map_at_10
value: 23.304
- type: map_at_100
value: 24.159
- type: map_at_1000
value: 24.281
- type: map_at_3
value: 21.316
- type: map_at_5
value: 22.383
- type: mrr_at_1
value: 21.645
- type: mrr_at_10
value: 27.365000000000002
- type: mrr_at_100
value: 28.108
- type: mrr_at_1000
value: 28.192
- type: mrr_at_3
value: 25.482
- type: mrr_at_5
value: 26.479999999999997
- type: ndcg_at_1
value: 21.645
- type: ndcg_at_10
value: 27.306
- type: ndcg_at_100
value: 31.496000000000002
- type: ndcg_at_1000
value: 34.53
- type: ndcg_at_3
value: 23.73
- type: ndcg_at_5
value: 25.294
- type: precision_at_1
value: 21.645
- type: precision_at_10
value: 4.797
- type: precision_at_100
value: 0.8059999999999999
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 10.850999999999999
- type: precision_at_5
value: 7.736
- type: recall_at_1
value: 17.586
- type: recall_at_10
value: 35.481
- type: recall_at_100
value: 54.534000000000006
- type: recall_at_1000
value: 76.456
- type: recall_at_3
value: 25.335
- type: recall_at_5
value: 29.473
- type: map_at_1
value: 25.095
- type: map_at_10
value: 32.374
- type: map_at_100
value: 33.537
- type: map_at_1000
value: 33.634
- type: map_at_3
value: 30.089
- type: map_at_5
value: 31.433
- type: mrr_at_1
value: 29.198
- type: mrr_at_10
value: 36.01
- type: mrr_at_100
value: 37.022
- type: mrr_at_1000
value: 37.083
- type: mrr_at_3
value: 33.94
- type: mrr_at_5
value: 35.148
- type: ndcg_at_1
value: 29.198
- type: ndcg_at_10
value: 36.729
- type: ndcg_at_100
value: 42.114000000000004
- type: ndcg_at_1000
value: 44.592
- type: ndcg_at_3
value: 32.644
- type: ndcg_at_5
value: 34.652
- type: precision_at_1
value: 29.198
- type: precision_at_10
value: 5.970000000000001
- type: precision_at_100
value: 0.967
- type: precision_at_1000
value: 0.129
- type: precision_at_3
value: 14.396999999999998
- type: precision_at_5
value: 10.093
- type: recall_at_1
value: 25.095
- type: recall_at_10
value: 46.392
- type: recall_at_100
value: 69.706
- type: recall_at_1000
value: 87.738
- type: recall_at_3
value: 35.303000000000004
- type: recall_at_5
value: 40.441
- type: map_at_1
value: 26.857999999999997
- type: map_at_10
value: 34.066
- type: map_at_100
value: 35.671
- type: map_at_1000
value: 35.881
- type: map_at_3
value: 31.304
- type: map_at_5
value: 32.885
- type: mrr_at_1
value: 32.411
- type: mrr_at_10
value: 38.987
- type: mrr_at_100
value: 39.894
- type: mrr_at_1000
value: 39.959
- type: mrr_at_3
value: 36.626999999999995
- type: mrr_at_5
value: 38.011
- type: ndcg_at_1
value: 32.411
- type: ndcg_at_10
value: 39.208
- type: ndcg_at_100
value: 44.626
- type: ndcg_at_1000
value: 47.43
- type: ndcg_at_3
value: 35.091
- type: ndcg_at_5
value: 37.119
- type: precision_at_1
value: 32.411
- type: precision_at_10
value: 7.51
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 16.14
- type: precision_at_5
value: 11.976
- type: recall_at_1
value: 26.857999999999997
- type: recall_at_10
value: 47.407
- type: recall_at_100
value: 72.236
- type: recall_at_1000
value: 90.77
- type: recall_at_3
value: 35.125
- type: recall_at_5
value: 40.522999999999996
- type: map_at_1
value: 21.3
- type: map_at_10
value: 27.412999999999997
- type: map_at_100
value: 28.29
- type: map_at_1000
value: 28.398
- type: map_at_3
value: 25.169999999999998
- type: map_at_5
value: 26.496
- type: mrr_at_1
value: 23.29
- type: mrr_at_10
value: 29.215000000000003
- type: mrr_at_100
value: 30.073
- type: mrr_at_1000
value: 30.156
- type: mrr_at_3
value: 26.956000000000003
- type: mrr_at_5
value: 28.38
- type: ndcg_at_1
value: 23.29
- type: ndcg_at_10
value: 31.113000000000003
- type: ndcg_at_100
value: 35.701
- type: ndcg_at_1000
value: 38.505
- type: ndcg_at_3
value: 26.727
- type: ndcg_at_5
value: 29.037000000000003
- type: precision_at_1
value: 23.29
- type: precision_at_10
value: 4.787
- type: precision_at_100
value: 0.763
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 11.091
- type: precision_at_5
value: 7.985
- type: recall_at_1
value: 21.3
- type: recall_at_10
value: 40.782000000000004
- type: recall_at_100
value: 62.13999999999999
- type: recall_at_1000
value: 83.012
- type: recall_at_3
value: 29.131
- type: recall_at_5
value: 34.624
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.631
- type: map_at_10
value: 16.634999999999998
- type: map_at_100
value: 18.23
- type: map_at_1000
value: 18.419
- type: map_at_3
value: 13.66
- type: map_at_5
value: 15.173
- type: mrr_at_1
value: 21.368000000000002
- type: mrr_at_10
value: 31.56
- type: mrr_at_100
value: 32.58
- type: mrr_at_1000
value: 32.633
- type: mrr_at_3
value: 28.241
- type: mrr_at_5
value: 30.225
- type: ndcg_at_1
value: 21.368000000000002
- type: ndcg_at_10
value: 23.855999999999998
- type: ndcg_at_100
value: 30.686999999999998
- type: ndcg_at_1000
value: 34.327000000000005
- type: ndcg_at_3
value: 18.781
- type: ndcg_at_5
value: 20.73
- type: precision_at_1
value: 21.368000000000002
- type: precision_at_10
value: 7.564
- type: precision_at_100
value: 1.496
- type: precision_at_1000
value: 0.217
- type: precision_at_3
value: 13.876
- type: precision_at_5
value: 11.062
- type: recall_at_1
value: 9.631
- type: recall_at_10
value: 29.517
- type: recall_at_100
value: 53.452
- type: recall_at_1000
value: 74.115
- type: recall_at_3
value: 17.605999999999998
- type: recall_at_5
value: 22.505
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.885
- type: map_at_10
value: 18.798000000000002
- type: map_at_100
value: 26.316
- type: map_at_1000
value: 27.869
- type: map_at_3
value: 13.719000000000001
- type: map_at_5
value: 15.716
- type: mrr_at_1
value: 66
- type: mrr_at_10
value: 74.263
- type: mrr_at_100
value: 74.519
- type: mrr_at_1000
value: 74.531
- type: mrr_at_3
value: 72.458
- type: mrr_at_5
value: 73.321
- type: ndcg_at_1
value: 53.87499999999999
- type: ndcg_at_10
value: 40.355999999999995
- type: ndcg_at_100
value: 44.366
- type: ndcg_at_1000
value: 51.771
- type: ndcg_at_3
value: 45.195
- type: ndcg_at_5
value: 42.187000000000005
- type: precision_at_1
value: 66
- type: precision_at_10
value: 31.75
- type: precision_at_100
value: 10.11
- type: precision_at_1000
value: 1.9800000000000002
- type: precision_at_3
value: 48.167
- type: precision_at_5
value: 40.050000000000004
- type: recall_at_1
value: 8.885
- type: recall_at_10
value: 24.471999999999998
- type: recall_at_100
value: 49.669000000000004
- type: recall_at_1000
value: 73.383
- type: recall_at_3
value: 14.872
- type: recall_at_5
value: 18.262999999999998
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.18
- type: f1
value: 40.26878691789978
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 62.751999999999995
- type: map_at_10
value: 74.131
- type: map_at_100
value: 74.407
- type: map_at_1000
value: 74.423
- type: map_at_3
value: 72.329
- type: map_at_5
value: 73.555
- type: mrr_at_1
value: 67.282
- type: mrr_at_10
value: 78.292
- type: mrr_at_100
value: 78.455
- type: mrr_at_1000
value: 78.458
- type: mrr_at_3
value: 76.755
- type: mrr_at_5
value: 77.839
- type: ndcg_at_1
value: 67.282
- type: ndcg_at_10
value: 79.443
- type: ndcg_at_100
value: 80.529
- type: ndcg_at_1000
value: 80.812
- type: ndcg_at_3
value: 76.281
- type: ndcg_at_5
value: 78.235
- type: precision_at_1
value: 67.282
- type: precision_at_10
value: 10.078
- type: precision_at_100
value: 1.082
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 30.178
- type: precision_at_5
value: 19.232
- type: recall_at_1
value: 62.751999999999995
- type: recall_at_10
value: 91.521
- type: recall_at_100
value: 95.997
- type: recall_at_1000
value: 97.775
- type: recall_at_3
value: 83.131
- type: recall_at_5
value: 87.93299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.861
- type: map_at_10
value: 30.252000000000002
- type: map_at_100
value: 32.082
- type: map_at_1000
value: 32.261
- type: map_at_3
value: 25.909
- type: map_at_5
value: 28.296
- type: mrr_at_1
value: 37.346000000000004
- type: mrr_at_10
value: 45.802
- type: mrr_at_100
value: 46.611999999999995
- type: mrr_at_1000
value: 46.659
- type: mrr_at_3
value: 43.056
- type: mrr_at_5
value: 44.637
- type: ndcg_at_1
value: 37.346000000000004
- type: ndcg_at_10
value: 38.169
- type: ndcg_at_100
value: 44.864
- type: ndcg_at_1000
value: 47.974
- type: ndcg_at_3
value: 33.619
- type: ndcg_at_5
value: 35.317
- type: precision_at_1
value: 37.346000000000004
- type: precision_at_10
value: 10.693999999999999
- type: precision_at_100
value: 1.775
- type: precision_at_1000
value: 0.231
- type: precision_at_3
value: 22.325
- type: precision_at_5
value: 16.852
- type: recall_at_1
value: 18.861
- type: recall_at_10
value: 45.672000000000004
- type: recall_at_100
value: 70.60499999999999
- type: recall_at_1000
value: 89.216
- type: recall_at_3
value: 30.361
- type: recall_at_5
value: 36.998999999999995
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.852999999999994
- type: map_at_10
value: 59.961
- type: map_at_100
value: 60.78
- type: map_at_1000
value: 60.843
- type: map_at_3
value: 56.39999999999999
- type: map_at_5
value: 58.646
- type: mrr_at_1
value: 75.70599999999999
- type: mrr_at_10
value: 82.321
- type: mrr_at_100
value: 82.516
- type: mrr_at_1000
value: 82.525
- type: mrr_at_3
value: 81.317
- type: mrr_at_5
value: 81.922
- type: ndcg_at_1
value: 75.70599999999999
- type: ndcg_at_10
value: 68.557
- type: ndcg_at_100
value: 71.485
- type: ndcg_at_1000
value: 72.71600000000001
- type: ndcg_at_3
value: 63.524
- type: ndcg_at_5
value: 66.338
- type: precision_at_1
value: 75.70599999999999
- type: precision_at_10
value: 14.463000000000001
- type: precision_at_100
value: 1.677
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 40.806
- type: precision_at_5
value: 26.709
- type: recall_at_1
value: 37.852999999999994
- type: recall_at_10
value: 72.316
- type: recall_at_100
value: 83.842
- type: recall_at_1000
value: 91.999
- type: recall_at_3
value: 61.209
- type: recall_at_5
value: 66.77199999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.46039999999999
- type: ap
value: 79.9812521351881
- type: f1
value: 85.31722909702084
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.704
- type: map_at_10
value: 35.329
- type: map_at_100
value: 36.494
- type: map_at_1000
value: 36.541000000000004
- type: map_at_3
value: 31.476
- type: map_at_5
value: 33.731
- type: mrr_at_1
value: 23.294999999999998
- type: mrr_at_10
value: 35.859
- type: mrr_at_100
value: 36.968
- type: mrr_at_1000
value: 37.008
- type: mrr_at_3
value: 32.085
- type: mrr_at_5
value: 34.299
- type: ndcg_at_1
value: 23.324
- type: ndcg_at_10
value: 42.274
- type: ndcg_at_100
value: 47.839999999999996
- type: ndcg_at_1000
value: 48.971
- type: ndcg_at_3
value: 34.454
- type: ndcg_at_5
value: 38.464
- type: precision_at_1
value: 23.324
- type: precision_at_10
value: 6.648
- type: precision_at_100
value: 0.9440000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.674999999999999
- type: precision_at_5
value: 10.850999999999999
- type: recall_at_1
value: 22.704
- type: recall_at_10
value: 63.660000000000004
- type: recall_at_100
value: 89.29899999999999
- type: recall_at_1000
value: 97.88900000000001
- type: recall_at_3
value: 42.441
- type: recall_at_5
value: 52.04
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.1326949384405
- type: f1
value: 92.89743579612082
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.62524654832347
- type: f1
value: 88.65106082263151
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.59039359573046
- type: f1
value: 90.31532892105662
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.21046038208581
- type: f1
value: 86.41459529813113
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.3180351380423
- type: f1
value: 86.71383078226444
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 86.24231464737792
- type: f1
value: 86.31845567592403
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.27131782945736
- type: f1
value: 57.52079940417103
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.2341504649197
- type: f1
value: 51.349951558039244
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.27418278852569
- type: f1
value: 50.1714985749095
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 67.68243031631694
- type: f1
value: 50.1066160836192
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 69.2362854069559
- type: f1
value: 48.821279948766424
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 71.71428571428571
- type: f1
value: 53.94611389496195
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.97646267652992
- type: f1
value: 57.26797883561521
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 53.65501008742435
- type: f1
value: 50.416258382177034
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.45796906523201
- type: f1
value: 53.306690547422185
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.59246805648957
- type: f1
value: 59.818381969051494
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.126429051782104
- type: f1
value: 58.25993593933026
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 50.057162071284466
- type: f1
value: 46.96095728790911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.64425016812375
- type: f1
value: 62.858291698755764
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.08944182918628
- type: f1
value: 62.44639030604241
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.68056489576328
- type: f1
value: 61.775326758789504
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.11163416274377
- type: f1
value: 69.70789096927015
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.40282447881641
- type: f1
value: 66.38492065671895
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.24613315400134
- type: f1
value: 64.3348019501336
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.78345662407531
- type: f1
value: 62.21279452354622
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.9455279085407
- type: f1
value: 65.48193124964094
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.05110961667788
- type: f1
value: 58.097856564684534
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.95292535305985
- type: f1
value: 62.09182174767901
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.97310020174848
- type: f1
value: 61.14252567730396
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.08069939475453
- type: f1
value: 57.044041742492034
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.63752521856085
- type: f1
value: 63.889340907205316
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.385339609952936
- type: f1
value: 53.449033750088304
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.93073301950234
- type: f1
value: 65.9884357824104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.94418291862812
- type: f1
value: 66.48740222583132
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.26025554808339
- type: f1
value: 50.19562815100793
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 48.98789509078682
- type: f1
value: 46.65788438676836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 44.68728984532616
- type: f1
value: 41.642419349541996
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.19300605245461
- type: f1
value: 55.8626492442437
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33826496301278
- type: f1
value: 63.89499791648792
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.33960995292536
- type: f1
value: 57.15242464180892
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.09347679892402
- type: f1
value: 59.64733214063841
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.75924680564896
- type: f1
value: 55.96585692366827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.48486886348352
- type: f1
value: 59.45143559032946
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.56422326832549
- type: f1
value: 54.96368702901926
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.18022864828512
- type: f1
value: 63.05369805040634
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.30329522528581
- type: f1
value: 64.06084612020727
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.36919973100201
- type: f1
value: 65.12154124788887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.98117014122394
- type: f1
value: 66.41847559806962
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.53799596503026
- type: f1
value: 62.17067330740817
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.01815736381977
- type: f1
value: 66.24988369607843
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.34700739744452
- type: f1
value: 59.957933424941636
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.23402824478815
- type: f1
value: 57.98836976018471
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.43849680666855
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 55.998655010087425
- type: f1
value: 52.83737515406804
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.71217215870882
- type: f1
value: 55.051794977833026
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 59.724277067921996
- type: f1
value: 56.33485571838306
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.59515803631473
- type: f1
value: 64.96772366193588
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.860793544048406
- type: f1
value: 58.148845819115394
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.40753194351043
- type: f1
value: 63.18903778054698
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.52320107599194
- type: f1
value: 58.356144563398516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.17014122394083
- type: f1
value: 63.919964062638925
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.15601882985878
- type: f1
value: 67.01451905761371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.65030262273034
- type: f1
value: 64.14420425129063
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.08742434431743
- type: f1
value: 63.044060042311756
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.52387357094821
- type: f1
value: 56.82398588814534
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.239408204438476
- type: f1
value: 61.92570286170469
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.74915938130463
- type: f1
value: 62.130740689396276
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.00336247478144
- type: f1
value: 63.71080635228055
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 52.837928715534645
- type: f1
value: 50.390741680320836
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.42098184263618
- type: f1
value: 71.41355113538995
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.95359784801613
- type: f1
value: 71.42699340156742
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.18157363819772
- type: f1
value: 69.74836113037671
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 76.78000685068261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.5030262273033
- type: f1
value: 71.71620130425673
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.24546065904505
- type: f1
value: 69.07638311730359
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.12911903160726
- type: f1
value: 68.32651736539815
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195025
- type: f1
value: 71.33986549860187
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.44451916610626
- type: f1
value: 66.90192664503866
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.16274377942166
- type: f1
value: 68.01090953775066
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.75319435104237
- type: f1
value: 70.18035309201403
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.14391392064559
- type: f1
value: 61.48286540778145
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.70275722932078
- type: f1
value: 70.26164779846495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.93813046402153
- type: f1
value: 58.8852862116525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.320107599193
- type: f1
value: 72.19836409602924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.65366509751176
- type: f1
value: 74.55188288799579
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.694014794889036
- type: f1
value: 58.11353311721067
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 54.37457969065231
- type: f1
value: 52.81306134311697
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 48.3086751849361
- type: f1
value: 45.396449765419376
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.151983860121064
- type: f1
value: 60.31762544281696
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.44788164088769
- type: f1
value: 71.68150151736367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.81439139206455
- type: f1
value: 62.06735559105593
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.04303967720242
- type: f1
value: 66.68298851670133
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.43913920645595
- type: f1
value: 60.25605977560783
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.90316072629456
- type: f1
value: 65.1325924692381
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 61.63752521856086
- type: f1
value: 59.14284778039585
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.63080026899797
- type: f1
value: 70.89771864626877
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.10827168796234
- type: f1
value: 71.71954219691159
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.59515803631471
- type: f1
value: 70.05040128099003
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.83389374579691
- type: f1
value: 70.84877936562735
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.18628110289173
- type: f1
value: 68.97232927921841
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.99260255548083
- type: f1
value: 72.85139492157732
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.26227303295225
- type: f1
value: 65.08833655469431
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48621385339611
- type: f1
value: 64.43483199071298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.14391392064559
- type: f1
value: 72.2580822579741
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.88567585743107
- type: f1
value: 58.3073765932569
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.38399462004034
- type: f1
value: 60.82139544252606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.58574310692671
- type: f1
value: 60.71443370385374
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.61398789509079
- type: f1
value: 70.99761812049401
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 62.73705447209146
- type: f1
value: 61.680849331794796
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.66778749159381
- type: f1
value: 71.17320646080115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.640215198386
- type: f1
value: 63.301805157015444
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.00672494956288
- type: f1
value: 70.26005548582106
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.42030934767989
- type: f1
value: 75.2074842882598
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.69266980497646
- type: f1
value: 70.94103167391192
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 28.91697191169135
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.434000079573313
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.96683513343383
- type: mrr
value: 31.967364078714834
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.5280000000000005
- type: map_at_10
value: 11.793
- type: map_at_100
value: 14.496999999999998
- type: map_at_1000
value: 15.783
- type: map_at_3
value: 8.838
- type: map_at_5
value: 10.07
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 51.531000000000006
- type: mrr_at_100
value: 52.205
- type: mrr_at_1000
value: 52.242999999999995
- type: mrr_at_3
value: 49.431999999999995
- type: mrr_at_5
value: 50.470000000000006
- type: ndcg_at_1
value: 42.415000000000006
- type: ndcg_at_10
value: 32.464999999999996
- type: ndcg_at_100
value: 28.927999999999997
- type: ndcg_at_1000
value: 37.629000000000005
- type: ndcg_at_3
value: 37.845
- type: ndcg_at_5
value: 35.147
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 23.932000000000002
- type: precision_at_100
value: 7.17
- type: precision_at_1000
value: 1.967
- type: precision_at_3
value: 35.397
- type: precision_at_5
value: 29.907
- type: recall_at_1
value: 5.5280000000000005
- type: recall_at_10
value: 15.568000000000001
- type: recall_at_100
value: 28.54
- type: recall_at_1000
value: 59.864
- type: recall_at_3
value: 9.822000000000001
- type: recall_at_5
value: 11.726
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.041000000000004
- type: map_at_10
value: 52.664
- type: map_at_100
value: 53.477
- type: map_at_1000
value: 53.505
- type: map_at_3
value: 48.510999999999996
- type: map_at_5
value: 51.036
- type: mrr_at_1
value: 41.338
- type: mrr_at_10
value: 55.071000000000005
- type: mrr_at_100
value: 55.672
- type: mrr_at_1000
value: 55.689
- type: mrr_at_3
value: 51.82
- type: mrr_at_5
value: 53.852
- type: ndcg_at_1
value: 41.338
- type: ndcg_at_10
value: 60.01800000000001
- type: ndcg_at_100
value: 63.409000000000006
- type: ndcg_at_1000
value: 64.017
- type: ndcg_at_3
value: 52.44799999999999
- type: ndcg_at_5
value: 56.571000000000005
- type: precision_at_1
value: 41.338
- type: precision_at_10
value: 9.531
- type: precision_at_100
value: 1.145
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.416
- type: precision_at_5
value: 16.46
- type: recall_at_1
value: 37.041000000000004
- type: recall_at_10
value: 79.76299999999999
- type: recall_at_100
value: 94.39
- type: recall_at_1000
value: 98.851
- type: recall_at_3
value: 60.465
- type: recall_at_5
value: 69.906
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 69.952
- type: map_at_10
value: 83.758
- type: map_at_100
value: 84.406
- type: map_at_1000
value: 84.425
- type: map_at_3
value: 80.839
- type: map_at_5
value: 82.646
- type: mrr_at_1
value: 80.62
- type: mrr_at_10
value: 86.947
- type: mrr_at_100
value: 87.063
- type: mrr_at_1000
value: 87.064
- type: mrr_at_3
value: 85.96000000000001
- type: mrr_at_5
value: 86.619
- type: ndcg_at_1
value: 80.63
- type: ndcg_at_10
value: 87.64800000000001
- type: ndcg_at_100
value: 88.929
- type: ndcg_at_1000
value: 89.054
- type: ndcg_at_3
value: 84.765
- type: ndcg_at_5
value: 86.291
- type: precision_at_1
value: 80.63
- type: precision_at_10
value: 13.314
- type: precision_at_100
value: 1.525
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.1
- type: precision_at_5
value: 24.372
- type: recall_at_1
value: 69.952
- type: recall_at_10
value: 94.955
- type: recall_at_100
value: 99.38
- type: recall_at_1000
value: 99.96000000000001
- type: recall_at_3
value: 86.60600000000001
- type: recall_at_5
value: 90.997
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 42.41329517878427
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 55.171278362748666
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.213
- type: map_at_10
value: 9.895
- type: map_at_100
value: 11.776
- type: map_at_1000
value: 12.084
- type: map_at_3
value: 7.2669999999999995
- type: map_at_5
value: 8.620999999999999
- type: mrr_at_1
value: 20.8
- type: mrr_at_10
value: 31.112000000000002
- type: mrr_at_100
value: 32.274
- type: mrr_at_1000
value: 32.35
- type: mrr_at_3
value: 28.133000000000003
- type: mrr_at_5
value: 29.892999999999997
- type: ndcg_at_1
value: 20.8
- type: ndcg_at_10
value: 17.163999999999998
- type: ndcg_at_100
value: 24.738
- type: ndcg_at_1000
value: 30.316
- type: ndcg_at_3
value: 16.665
- type: ndcg_at_5
value: 14.478
- type: precision_at_1
value: 20.8
- type: precision_at_10
value: 8.74
- type: precision_at_100
value: 1.963
- type: precision_at_1000
value: 0.33
- type: precision_at_3
value: 15.467
- type: precision_at_5
value: 12.6
- type: recall_at_1
value: 4.213
- type: recall_at_10
value: 17.698
- type: recall_at_100
value: 39.838
- type: recall_at_1000
value: 66.893
- type: recall_at_3
value: 9.418
- type: recall_at_5
value: 12.773000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.90453315738294
- type: cos_sim_spearman
value: 78.51197850080254
- type: euclidean_pearson
value: 80.09647123597748
- type: euclidean_spearman
value: 78.63548011514061
- type: manhattan_pearson
value: 80.10645285675231
- type: manhattan_spearman
value: 78.57861806068901
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.2616156846401
- type: cos_sim_spearman
value: 76.69713867850156
- type: euclidean_pearson
value: 77.97948563800394
- type: euclidean_spearman
value: 74.2371211567807
- type: manhattan_pearson
value: 77.69697879669705
- type: manhattan_spearman
value: 73.86529778022278
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 77.0293269315045
- type: cos_sim_spearman
value: 78.02555120584198
- type: euclidean_pearson
value: 78.25398100379078
- type: euclidean_spearman
value: 78.66963870599464
- type: manhattan_pearson
value: 78.14314682167348
- type: manhattan_spearman
value: 78.57692322969135
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.16989925136942
- type: cos_sim_spearman
value: 76.5996225327091
- type: euclidean_pearson
value: 77.8319003279786
- type: euclidean_spearman
value: 76.42824009468998
- type: manhattan_pearson
value: 77.69118862737736
- type: manhattan_spearman
value: 76.25568104762812
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.42012286935325
- type: cos_sim_spearman
value: 88.15654297884122
- type: euclidean_pearson
value: 87.34082819427852
- type: euclidean_spearman
value: 88.06333589547084
- type: manhattan_pearson
value: 87.25115596784842
- type: manhattan_spearman
value: 87.9559927695203
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.88222044996712
- type: cos_sim_spearman
value: 84.28476589061077
- type: euclidean_pearson
value: 83.17399758058309
- type: euclidean_spearman
value: 83.85497357244542
- type: manhattan_pearson
value: 83.0308397703786
- type: manhattan_spearman
value: 83.71554539935046
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.20682986257339
- type: cos_sim_spearman
value: 79.94567120362092
- type: euclidean_pearson
value: 79.43122480368902
- type: euclidean_spearman
value: 79.94802077264987
- type: manhattan_pearson
value: 79.32653021527081
- type: manhattan_spearman
value: 79.80961146709178
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 74.46578144394383
- type: cos_sim_spearman
value: 74.52496637472179
- type: euclidean_pearson
value: 72.2903807076809
- type: euclidean_spearman
value: 73.55549359771645
- type: manhattan_pearson
value: 72.09324837709393
- type: manhattan_spearman
value: 73.36743103606581
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 71.37272335116
- type: cos_sim_spearman
value: 71.26702117766037
- type: euclidean_pearson
value: 67.114829954434
- type: euclidean_spearman
value: 66.37938893947761
- type: manhattan_pearson
value: 66.79688574095246
- type: manhattan_spearman
value: 66.17292828079667
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.61016770129092
- type: cos_sim_spearman
value: 82.08515426632214
- type: euclidean_pearson
value: 80.557340361131
- type: euclidean_spearman
value: 80.37585812266175
- type: manhattan_pearson
value: 80.6782873404285
- type: manhattan_spearman
value: 80.6678073032024
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.00150745350108
- type: cos_sim_spearman
value: 87.83441972211425
- type: euclidean_pearson
value: 87.94826702308792
- type: euclidean_spearman
value: 87.46143974860725
- type: manhattan_pearson
value: 87.97560344306105
- type: manhattan_spearman
value: 87.5267102829796
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 64.76325252267235
- type: cos_sim_spearman
value: 63.32615095463905
- type: euclidean_pearson
value: 64.07920669155716
- type: euclidean_spearman
value: 61.21409893072176
- type: manhattan_pearson
value: 64.26308625680016
- type: manhattan_spearman
value: 61.2438185254079
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.82644463022595
- type: cos_sim_spearman
value: 76.50381269945073
- type: euclidean_pearson
value: 75.1328548315934
- type: euclidean_spearman
value: 75.63761139408453
- type: manhattan_pearson
value: 75.18610101241407
- type: manhattan_spearman
value: 75.30669266354164
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.49994164686832
- type: cos_sim_spearman
value: 86.73743986245549
- type: euclidean_pearson
value: 86.8272894387145
- type: euclidean_spearman
value: 85.97608491000507
- type: manhattan_pearson
value: 86.74960140396779
- type: manhattan_spearman
value: 85.79285984190273
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.58172210788469
- type: cos_sim_spearman
value: 80.17516468334607
- type: euclidean_pearson
value: 77.56537843470504
- type: euclidean_spearman
value: 77.57264627395521
- type: manhattan_pearson
value: 78.09703521695943
- type: manhattan_spearman
value: 78.15942760916954
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.7589932931751
- type: cos_sim_spearman
value: 80.15210089028162
- type: euclidean_pearson
value: 77.54135223516057
- type: euclidean_spearman
value: 77.52697996368764
- type: manhattan_pearson
value: 77.65734439572518
- type: manhattan_spearman
value: 77.77702992016121
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 79.16682365511267
- type: cos_sim_spearman
value: 79.25311267628506
- type: euclidean_pearson
value: 77.54882036762244
- type: euclidean_spearman
value: 77.33212935194827
- type: manhattan_pearson
value: 77.98405516064015
- type: manhattan_spearman
value: 77.85075717865719
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.10473294775917
- type: cos_sim_spearman
value: 61.82780474476838
- type: euclidean_pearson
value: 45.885111672377256
- type: euclidean_spearman
value: 56.88306351932454
- type: manhattan_pearson
value: 46.101218127323186
- type: manhattan_spearman
value: 56.80953694186333
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 45.781923079584146
- type: cos_sim_spearman
value: 55.95098449691107
- type: euclidean_pearson
value: 25.4571031323205
- type: euclidean_spearman
value: 49.859978118078935
- type: manhattan_pearson
value: 25.624938455041384
- type: manhattan_spearman
value: 49.99546185049401
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 60.00618133997907
- type: cos_sim_spearman
value: 66.57896677718321
- type: euclidean_pearson
value: 42.60118466388821
- type: euclidean_spearman
value: 62.8210759715209
- type: manhattan_pearson
value: 42.63446860604094
- type: manhattan_spearman
value: 62.73803068925271
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 28.460759121626943
- type: cos_sim_spearman
value: 34.13459007469131
- type: euclidean_pearson
value: 6.0917739325525195
- type: euclidean_spearman
value: 27.9947262664867
- type: manhattan_pearson
value: 6.16877864169911
- type: manhattan_spearman
value: 28.00664163971514
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.42546621771696
- type: cos_sim_spearman
value: 63.699663168970474
- type: euclidean_pearson
value: 38.12085278789738
- type: euclidean_spearman
value: 58.12329140741536
- type: manhattan_pearson
value: 37.97364549443335
- type: manhattan_spearman
value: 57.81545502318733
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 46.82241380954213
- type: cos_sim_spearman
value: 57.86569456006391
- type: euclidean_pearson
value: 31.80480070178813
- type: euclidean_spearman
value: 52.484000620130104
- type: manhattan_pearson
value: 31.952708554646097
- type: manhattan_spearman
value: 52.8560972356195
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 52.00447170498087
- type: cos_sim_spearman
value: 60.664116225735164
- type: euclidean_pearson
value: 33.87382555421702
- type: euclidean_spearman
value: 55.74649067458667
- type: manhattan_pearson
value: 33.99117246759437
- type: manhattan_spearman
value: 55.98749034923899
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 58.06497233105448
- type: cos_sim_spearman
value: 65.62968801135676
- type: euclidean_pearson
value: 47.482076613243905
- type: euclidean_spearman
value: 62.65137791498299
- type: manhattan_pearson
value: 47.57052626104093
- type: manhattan_spearman
value: 62.436916516613294
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 70.49397298562575
- type: cos_sim_spearman
value: 74.79604041187868
- type: euclidean_pearson
value: 49.661891561317795
- type: euclidean_spearman
value: 70.31535537621006
- type: manhattan_pearson
value: 49.553715741850006
- type: manhattan_spearman
value: 70.24779344636806
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.640574515348696
- type: cos_sim_spearman
value: 54.927959317689
- type: euclidean_pearson
value: 29.00139666967476
- type: euclidean_spearman
value: 41.86386566971605
- type: manhattan_pearson
value: 29.47411067730344
- type: manhattan_spearman
value: 42.337438424952786
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 68.14095292259312
- type: cos_sim_spearman
value: 73.99017581234789
- type: euclidean_pearson
value: 46.46304297872084
- type: euclidean_spearman
value: 60.91834114800041
- type: manhattan_pearson
value: 47.07072666338692
- type: manhattan_spearman
value: 61.70415727977926
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 73.27184653359575
- type: cos_sim_spearman
value: 77.76070252418626
- type: euclidean_pearson
value: 62.30586577544778
- type: euclidean_spearman
value: 75.14246629110978
- type: manhattan_pearson
value: 62.328196884927046
- type: manhattan_spearman
value: 75.1282792981433
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.59448528829957
- type: cos_sim_spearman
value: 70.37277734222123
- type: euclidean_pearson
value: 57.63145565721123
- type: euclidean_spearman
value: 66.10113048304427
- type: manhattan_pearson
value: 57.18897811586808
- type: manhattan_spearman
value: 66.5595511215901
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.37520607720838
- type: cos_sim_spearman
value: 69.92282148997948
- type: euclidean_pearson
value: 40.55768770125291
- type: euclidean_spearman
value: 55.189128944669605
- type: manhattan_pearson
value: 41.03566433468883
- type: manhattan_spearman
value: 55.61251893174558
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.791929533771835
- type: cos_sim_spearman
value: 66.45819707662093
- type: euclidean_pearson
value: 39.03686018511092
- type: euclidean_spearman
value: 56.01282695640428
- type: manhattan_pearson
value: 38.91586623619632
- type: manhattan_spearman
value: 56.69394943612747
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.82224468473866
- type: cos_sim_spearman
value: 59.467307194781164
- type: euclidean_pearson
value: 27.428459190256145
- type: euclidean_spearman
value: 60.83463107397519
- type: manhattan_pearson
value: 27.487391578496638
- type: manhattan_spearman
value: 61.281380460246496
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 16.306666792752644
- type: cos_sim_spearman
value: 39.35486427252405
- type: euclidean_pearson
value: -2.7887154897955435
- type: euclidean_spearman
value: 27.1296051831719
- type: manhattan_pearson
value: -3.202291270581297
- type: manhattan_spearman
value: 26.32895849218158
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.67006803805076
- type: cos_sim_spearman
value: 73.24670207647144
- type: euclidean_pearson
value: 46.91884681500483
- type: euclidean_spearman
value: 16.903085094570333
- type: manhattan_pearson
value: 46.88391675325812
- type: manhattan_spearman
value: 28.17180849095055
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 83.79555591223837
- type: cos_sim_spearman
value: 85.63658602085185
- type: euclidean_pearson
value: 85.22080894037671
- type: euclidean_spearman
value: 85.54113580167038
- type: manhattan_pearson
value: 85.1639505960118
- type: manhattan_spearman
value: 85.43502665436196
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.73900991689766
- type: mrr
value: 94.81624131133934
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 55.678000000000004
- type: map_at_10
value: 65.135
- type: map_at_100
value: 65.824
- type: map_at_1000
value: 65.852
- type: map_at_3
value: 62.736000000000004
- type: map_at_5
value: 64.411
- type: mrr_at_1
value: 58.333
- type: mrr_at_10
value: 66.5
- type: mrr_at_100
value: 67.053
- type: mrr_at_1000
value: 67.08
- type: mrr_at_3
value: 64.944
- type: mrr_at_5
value: 65.89399999999999
- type: ndcg_at_1
value: 58.333
- type: ndcg_at_10
value: 69.34700000000001
- type: ndcg_at_100
value: 72.32
- type: ndcg_at_1000
value: 73.014
- type: ndcg_at_3
value: 65.578
- type: ndcg_at_5
value: 67.738
- type: precision_at_1
value: 58.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.444
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 55.678000000000004
- type: recall_at_10
value: 80.72200000000001
- type: recall_at_100
value: 93.93299999999999
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 70.783
- type: recall_at_5
value: 75.978
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74653465346535
- type: cos_sim_ap
value: 93.01476369929063
- type: cos_sim_f1
value: 86.93009118541033
- type: cos_sim_precision
value: 88.09034907597535
- type: cos_sim_recall
value: 85.8
- type: dot_accuracy
value: 99.22970297029703
- type: dot_ap
value: 51.58725659485144
- type: dot_f1
value: 53.51351351351352
- type: dot_precision
value: 58.235294117647065
- type: dot_recall
value: 49.5
- type: euclidean_accuracy
value: 99.74356435643564
- type: euclidean_ap
value: 92.40332894384368
- type: euclidean_f1
value: 86.97838109602817
- type: euclidean_precision
value: 87.46208291203236
- type: euclidean_recall
value: 86.5
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 92.01320815721121
- type: manhattan_f1
value: 86.4135864135864
- type: manhattan_precision
value: 86.32734530938124
- type: manhattan_recall
value: 86.5
- type: max_accuracy
value: 99.74653465346535
- type: max_ap
value: 93.01476369929063
- type: max_f1
value: 86.97838109602817
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 55.2660514302523
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 30.4637783572547
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.41377758357637
- type: mrr
value: 50.138451213818854
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 28.887846011166594
- type: cos_sim_spearman
value: 30.10823258355903
- type: dot_pearson
value: 12.888049550236385
- type: dot_spearman
value: 12.827495903098123
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.21
- type: map_at_10
value: 1.667
- type: map_at_100
value: 9.15
- type: map_at_1000
value: 22.927
- type: map_at_3
value: 0.573
- type: map_at_5
value: 0.915
- type: mrr_at_1
value: 80
- type: mrr_at_10
value: 87.167
- type: mrr_at_100
value: 87.167
- type: mrr_at_1000
value: 87.167
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 87.167
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 69.757
- type: ndcg_at_100
value: 52.402
- type: ndcg_at_1000
value: 47.737
- type: ndcg_at_3
value: 71.866
- type: ndcg_at_5
value: 72.225
- type: precision_at_1
value: 80
- type: precision_at_10
value: 75
- type: precision_at_100
value: 53.959999999999994
- type: precision_at_1000
value: 21.568
- type: precision_at_3
value: 76.667
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.21
- type: recall_at_10
value: 1.9189999999999998
- type: recall_at_100
value: 12.589
- type: recall_at_1000
value: 45.312000000000005
- type: recall_at_3
value: 0.61
- type: recall_at_5
value: 1.019
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.10000000000001
- type: f1
value: 90.06
- type: precision
value: 89.17333333333333
- type: recall
value: 92.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.06936416184971
- type: f1
value: 50.87508028259473
- type: precision
value: 48.97398843930635
- type: recall
value: 56.06936416184971
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.3170731707317
- type: f1
value: 52.96080139372822
- type: precision
value: 51.67861124382864
- type: recall
value: 57.3170731707317
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.67333333333333
- type: precision
value: 91.90833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97.07333333333332
- type: precision
value: 96.79500000000002
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.2
- type: precision
value: 92.48333333333333
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.9
- type: f1
value: 91.26666666666667
- type: precision
value: 90.59444444444445
- type: recall
value: 92.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 34.32835820895522
- type: f1
value: 29.074180380150533
- type: precision
value: 28.068207322920596
- type: recall
value: 34.32835820895522
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.5
- type: f1
value: 74.3945115995116
- type: precision
value: 72.82967843459222
- type: recall
value: 78.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34146341463415
- type: f1
value: 61.2469400518181
- type: precision
value: 59.63977756660683
- type: recall
value: 66.34146341463415
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.9
- type: f1
value: 76.90349206349207
- type: precision
value: 75.32921568627451
- type: recall
value: 80.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.93317132442284
- type: f1
value: 81.92519105034295
- type: precision
value: 80.71283920615635
- type: recall
value: 84.93317132442284
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.1304347826087
- type: f1
value: 65.22394755003451
- type: precision
value: 62.912422360248435
- type: recall
value: 71.1304347826087
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.82608695652173
- type: f1
value: 75.55693581780538
- type: precision
value: 73.79420289855072
- type: recall
value: 79.82608695652173
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74
- type: f1
value: 70.51022222222223
- type: precision
value: 69.29673599347512
- type: recall
value: 74
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 74.14238095238095
- type: precision
value: 72.27214285714285
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.97466827503016
- type: f1
value: 43.080330405420874
- type: precision
value: 41.36505499593557
- type: recall
value: 48.97466827503016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.60000000000001
- type: f1
value: 86.62333333333333
- type: precision
value: 85.225
- type: recall
value: 89.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.2
- type: f1
value: 39.5761253006253
- type: precision
value: 37.991358436312
- type: recall
value: 45.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.70333333333333
- type: precision
value: 85.53166666666667
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.095238095238095
- type: f1
value: 44.60650460650461
- type: precision
value: 42.774116796477045
- type: recall
value: 50.095238095238095
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.4
- type: f1
value: 58.35967261904762
- type: precision
value: 56.54857142857143
- type: recall
value: 63.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 87.075
- type: precision
value: 86.12095238095239
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.90333333333334
- type: precision
value: 95.50833333333333
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.9
- type: f1
value: 88.6288888888889
- type: precision
value: 87.61607142857142
- type: recall
value: 90.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.2
- type: f1
value: 60.54377630539395
- type: precision
value: 58.89434482711381
- type: recall
value: 65.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87
- type: f1
value: 84.32412698412699
- type: precision
value: 83.25527777777778
- type: recall
value: 87
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.7
- type: f1
value: 63.07883541295306
- type: precision
value: 61.06117424242426
- type: recall
value: 68.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.78333333333335
- type: precision
value: 90.86666666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 96.96666666666667
- type: precision
value: 96.61666666666667
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27493261455525
- type: f1
value: 85.90745732255168
- type: precision
value: 84.91389637616052
- type: recall
value: 88.27493261455525
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.5982905982906
- type: f1
value: 88.4900284900285
- type: precision
value: 87.57122507122507
- type: recall
value: 90.5982905982906
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.5
- type: f1
value: 86.90769841269842
- type: precision
value: 85.80178571428571
- type: recall
value: 89.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.5
- type: f1
value: 78.36796536796538
- type: precision
value: 76.82196969696969
- type: recall
value: 82.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.48846960167715
- type: f1
value: 66.78771089148448
- type: precision
value: 64.98302885095339
- type: recall
value: 71.48846960167715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.50333333333333
- type: precision
value: 91.77499999999999
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.20622568093385
- type: f1
value: 66.83278891450098
- type: precision
value: 65.35065777283677
- type: recall
value: 71.20622568093385
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.717948717948715
- type: f1
value: 43.53146853146853
- type: precision
value: 42.04721204721204
- type: recall
value: 48.717948717948715
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.5
- type: f1
value: 53.8564991863928
- type: precision
value: 52.40329436122275
- type: recall
value: 58.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.8
- type: f1
value: 88.29
- type: precision
value: 87.09166666666667
- type: recall
value: 90.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.28971962616822
- type: f1
value: 62.63425307817832
- type: precision
value: 60.98065939771546
- type: recall
value: 67.28971962616822
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.7
- type: f1
value: 75.5264472455649
- type: precision
value: 74.38205086580086
- type: recall
value: 78.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.7
- type: f1
value: 86.10809523809525
- type: precision
value: 85.07602564102565
- type: recall
value: 88.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.99999999999999
- type: f1
value: 52.85487521402737
- type: precision
value: 51.53985162713104
- type: recall
value: 56.99999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94
- type: f1
value: 92.45333333333333
- type: precision
value: 91.79166666666667
- type: recall
value: 94
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.61333333333333
- type: precision
value: 89.83333333333331
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34555555555555
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.2
- type: f1
value: 76.6563035113035
- type: precision
value: 75.3014652014652
- type: recall
value: 80.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.7
- type: f1
value: 82.78689263765207
- type: precision
value: 82.06705086580087
- type: recall
value: 84.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 50.33333333333333
- type: f1
value: 45.461523661523664
- type: precision
value: 43.93545574795575
- type: recall
value: 50.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.6000000000000005
- type: f1
value: 5.442121400446441
- type: precision
value: 5.146630385487529
- type: recall
value: 6.6000000000000005
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85
- type: f1
value: 81.04666666666667
- type: precision
value: 79.25
- type: recall
value: 85
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.32142857142857
- type: f1
value: 42.333333333333336
- type: precision
value: 40.69196428571429
- type: recall
value: 47.32142857142857
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 30.735455543358945
- type: f1
value: 26.73616790022338
- type: precision
value: 25.397823220451283
- type: recall
value: 30.735455543358945
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 25.1
- type: f1
value: 21.975989896371022
- type: precision
value: 21.059885632257203
- type: recall
value: 25.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.75666666666666
- type: precision
value: 92.06166666666665
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.74
- type: precision
value: 92.09166666666667
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.3
- type: f1
value: 66.922442002442
- type: precision
value: 65.38249567099568
- type: recall
value: 71.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 40.300000000000004
- type: f1
value: 35.78682789299971
- type: precision
value: 34.66425128716588
- type: recall
value: 40.300000000000004
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.82333333333334
- type: precision
value: 94.27833333333334
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 51.1
- type: f1
value: 47.179074753133584
- type: precision
value: 46.06461044702424
- type: recall
value: 51.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.7
- type: f1
value: 84.71
- type: precision
value: 83.46166666666667
- type: recall
value: 87.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.68333333333334
- type: precision
value: 94.13333333333334
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.39999999999999
- type: f1
value: 82.5577380952381
- type: precision
value: 81.36833333333334
- type: recall
value: 85.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 21.16788321167883
- type: f1
value: 16.948865627297987
- type: precision
value: 15.971932568647897
- type: recall
value: 21.16788321167883
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.9
- type: f1
value: 5.515526831658907
- type: precision
value: 5.141966366966367
- type: recall
value: 6.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39666666666668
- type: precision
value: 90.58666666666667
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 89.95666666666666
- type: precision
value: 88.92833333333333
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.76190476190477
- type: f1
value: 74.93386243386244
- type: precision
value: 73.11011904761904
- type: recall
value: 79.76190476190477
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.799999999999999
- type: f1
value: 6.921439712248537
- type: precision
value: 6.489885109680683
- type: recall
value: 8.799999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.75569358178054
- type: f1
value: 40.34699501312631
- type: precision
value: 38.57886764719063
- type: recall
value: 45.75569358178054
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.4
- type: f1
value: 89.08333333333333
- type: precision
value: 88.01666666666668
- type: recall
value: 91.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.06690476190477
- type: precision
value: 91.45095238095239
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 7.5
- type: f1
value: 6.200363129378736
- type: precision
value: 5.89115314822466
- type: recall
value: 7.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 73.59307359307358
- type: f1
value: 68.38933553219267
- type: precision
value: 66.62698412698413
- type: recall
value: 73.59307359307358
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.8473282442748
- type: f1
value: 64.72373682297346
- type: precision
value: 62.82834214131924
- type: recall
value: 69.8473282442748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5254730713246
- type: f1
value: 96.72489082969432
- type: precision
value: 96.33672974284326
- type: recall
value: 97.5254730713246
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.6
- type: f1
value: 72.42746031746033
- type: precision
value: 71.14036630036631
- type: recall
value: 75.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.24293785310734
- type: f1
value: 88.86064030131826
- type: precision
value: 87.73540489642184
- type: recall
value: 91.24293785310734
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.2
- type: f1
value: 4.383083659794954
- type: precision
value: 4.027861324289673
- type: recall
value: 6.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 84.09428571428572
- type: precision
value: 83.00333333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.699999999999996
- type: f1
value: 56.1584972394755
- type: precision
value: 54.713456330903135
- type: recall
value: 60.699999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.2
- type: f1
value: 80.66190476190475
- type: precision
value: 79.19690476190476
- type: recall
value: 84.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.33
- type: precision
value: 90.45
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 6.3
- type: f1
value: 5.126828976748276
- type: precision
value: 4.853614328966668
- type: recall
value: 6.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.76943699731903
- type: f1
value: 77.82873739308057
- type: precision
value: 76.27622452019234
- type: recall
value: 81.76943699731903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.30000000000001
- type: f1
value: 90.29666666666665
- type: precision
value: 89.40333333333334
- type: recall
value: 92.30000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 29.249011857707508
- type: f1
value: 24.561866096392947
- type: precision
value: 23.356583740215456
- type: recall
value: 29.249011857707508
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.23943661971832
- type: precision
value: 71.66666666666667
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.35928143712575
- type: f1
value: 15.997867865075824
- type: precision
value: 14.882104658301346
- type: recall
value: 20.35928143712575
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.2
- type: f1
value: 90.25999999999999
- type: precision
value: 89.45333333333335
- type: recall
value: 92.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 23.15270935960591
- type: f1
value: 19.65673625772148
- type: precision
value: 18.793705293464992
- type: recall
value: 23.15270935960591
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.154929577464785
- type: f1
value: 52.3868463305083
- type: precision
value: 50.14938113529662
- type: recall
value: 59.154929577464785
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 70.51282051282051
- type: f1
value: 66.8089133089133
- type: precision
value: 65.37645687645687
- type: recall
value: 70.51282051282051
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93
- type: precision
value: 92.23333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.62212943632568
- type: f1
value: 34.3278276962583
- type: precision
value: 33.07646935732408
- type: recall
value: 38.62212943632568
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 28.1
- type: f1
value: 23.579609223054604
- type: precision
value: 22.39622774921555
- type: recall
value: 28.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.27361563517914
- type: f1
value: 85.12486427795874
- type: precision
value: 83.71335504885994
- type: recall
value: 88.27361563517914
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.6
- type: f1
value: 86.39928571428571
- type: precision
value: 85.4947557997558
- type: recall
value: 88.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.77952380952381
- type: precision
value: 82.67602564102565
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.52755905511812
- type: f1
value: 75.3055868016498
- type: precision
value: 73.81889763779527
- type: recall
value: 79.52755905511812
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.76261904761905
- type: precision
value: 72.11670995670995
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 53.8781163434903
- type: f1
value: 47.25804051288816
- type: precision
value: 45.0603482390186
- type: recall
value: 53.8781163434903
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.10000000000001
- type: f1
value: 88.88
- type: precision
value: 87.96333333333334
- type: recall
value: 91.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 38.46153846153847
- type: f1
value: 34.43978243978244
- type: precision
value: 33.429487179487175
- type: recall
value: 38.46153846153847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.9
- type: f1
value: 86.19888888888887
- type: precision
value: 85.07440476190476
- type: recall
value: 88.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.9
- type: f1
value: 82.58857142857143
- type: precision
value: 81.15666666666667
- type: recall
value: 85.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.8
- type: f1
value: 83.36999999999999
- type: precision
value: 81.86833333333333
- type: recall
value: 86.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.51415094339622
- type: f1
value: 63.195000099481234
- type: precision
value: 61.394033442972116
- type: recall
value: 68.51415094339622
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.5
- type: f1
value: 86.14603174603175
- type: precision
value: 85.1162037037037
- type: recall
value: 88.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.62043795620438
- type: f1
value: 94.40389294403892
- type: precision
value: 93.7956204379562
- type: recall
value: 95.62043795620438
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.8
- type: f1
value: 78.6532178932179
- type: precision
value: 77.46348795840176
- type: recall
value: 81.8
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.603
- type: map_at_10
value: 8.5
- type: map_at_100
value: 12.985
- type: map_at_1000
value: 14.466999999999999
- type: map_at_3
value: 4.859999999999999
- type: map_at_5
value: 5.817
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 42.331
- type: mrr_at_100
value: 43.592999999999996
- type: mrr_at_1000
value: 43.592999999999996
- type: mrr_at_3
value: 38.435
- type: mrr_at_5
value: 39.966
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 21.353
- type: ndcg_at_100
value: 31.087999999999997
- type: ndcg_at_1000
value: 43.163000000000004
- type: ndcg_at_3
value: 22.999
- type: ndcg_at_5
value: 21.451
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 19.387999999999998
- type: precision_at_100
value: 6.265
- type: precision_at_1000
value: 1.4160000000000001
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 21.224
- type: recall_at_1
value: 2.603
- type: recall_at_10
value: 14.474
- type: recall_at_100
value: 40.287
- type: recall_at_1000
value: 76.606
- type: recall_at_3
value: 5.978
- type: recall_at_5
value: 7.819
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 69.7848
- type: ap
value: 13.661023167088224
- type: f1
value: 53.61686134460943
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.28183361629882
- type: f1
value: 61.55481034919965
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.972128420092396
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 85.59933241938367
- type: cos_sim_ap
value: 72.20760361208136
- type: cos_sim_f1
value: 66.4447731755424
- type: cos_sim_precision
value: 62.35539102267469
- type: cos_sim_recall
value: 71.10817941952506
- type: dot_accuracy
value: 78.98313166835548
- type: dot_ap
value: 44.492521645493795
- type: dot_f1
value: 45.814889336016094
- type: dot_precision
value: 37.02439024390244
- type: dot_recall
value: 60.07915567282321
- type: euclidean_accuracy
value: 85.3907134767837
- type: euclidean_ap
value: 71.53847289080343
- type: euclidean_f1
value: 65.95952206778834
- type: euclidean_precision
value: 61.31006346328196
- type: euclidean_recall
value: 71.37203166226914
- type: manhattan_accuracy
value: 85.40859510043511
- type: manhattan_ap
value: 71.49664104395515
- type: manhattan_f1
value: 65.98569969356485
- type: manhattan_precision
value: 63.928748144482924
- type: manhattan_recall
value: 68.17941952506597
- type: max_accuracy
value: 85.59933241938367
- type: max_ap
value: 72.20760361208136
- type: max_f1
value: 66.4447731755424
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.83261536073273
- type: cos_sim_ap
value: 85.48178133644264
- type: cos_sim_f1
value: 77.87816307403935
- type: cos_sim_precision
value: 75.88953021114926
- type: cos_sim_recall
value: 79.97382198952879
- type: dot_accuracy
value: 79.76287499514883
- type: dot_ap
value: 59.17438838475084
- type: dot_f1
value: 56.34566667855996
- type: dot_precision
value: 52.50349092359864
- type: dot_recall
value: 60.794579611949494
- type: euclidean_accuracy
value: 88.76857996662397
- type: euclidean_ap
value: 85.22764834359887
- type: euclidean_f1
value: 77.65379751543554
- type: euclidean_precision
value: 75.11152683839401
- type: euclidean_recall
value: 80.37419156144134
- type: manhattan_accuracy
value: 88.6987231730508
- type: manhattan_ap
value: 85.18907981724007
- type: manhattan_f1
value: 77.51967028849757
- type: manhattan_precision
value: 75.49992701795358
- type: manhattan_recall
value: 79.65044656606098
- type: max_accuracy
value: 88.83261536073273
- type: max_ap
value: 85.48178133644264
- type: max_f1
value: 77.87816307403935
---
## Multilingual-E5-base
[Text Embeddings by Weakly-Supervised Contrastive Pre-training](https://arxiv.org/pdf/2212.03533.pdf).
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022
This model has 12 layers and the embedding size is 768.
## Usage
Below is an example to encode queries and passages from the MS-MARCO passage ranking dataset.
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def average_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = ['query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右,放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"]
tokenizer = AutoTokenizer.from_pretrained('intfloat/multilingual-e5-base')
model = AutoModel.from_pretrained('intfloat/multilingual-e5-base')
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Supported Languages
This model is initialized from [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
and continually trained on a mixture of multilingual datasets.
It supports 100 languages from xlm-roberta,
but low-resource languages may see performance degradation.
## Training Details
**Initialization**: [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
**First stage**: contrastive pre-training with weak supervision
| Dataset | Weak supervision | # of text pairs |
|--------------------------------------------------------------------------------------------------------|---------------------------------------|-----------------|
| Filtered [mC4](https://huggingface.co/datasets/mc4) | (title, page content) | 1B |
| [CC News](https://huggingface.co/datasets/intfloat/multilingual_cc_news) | (title, news content) | 400M |
| [NLLB](https://huggingface.co/datasets/allenai/nllb) | translation pairs | 2.4B |
| [Wikipedia](https://huggingface.co/datasets/intfloat/wikipedia) | (hierarchical section title, passage) | 150M |
| Filtered [Reddit](https://www.reddit.com/) | (comment, response) | 800M |
| [S2ORC](https://github.com/allenai/s2orc) | (title, abstract) and citation pairs | 100M |
| [Stackexchange](https://stackexchange.com/) | (question, answer) | 50M |
| [xP3](https://huggingface.co/datasets/bigscience/xP3) | (input prompt, response) | 80M |
| [Miscellaneous unsupervised SBERT data](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | - | 10M |
**Second stage**: supervised fine-tuning
| Dataset | Language | # of text pairs |
|----------------------------------------------------------------------------------------|--------------|-----------------|
| [MS MARCO](https://microsoft.github.io/msmarco/) | English | 500k |
| [NQ](https://github.com/facebookresearch/DPR) | English | 70k |
| [Trivia QA](https://github.com/facebookresearch/DPR) | English | 60k |
| [NLI from SimCSE](https://github.com/princeton-nlp/SimCSE) | English | <300k |
| [ELI5](https://huggingface.co/datasets/eli5) | English | 500k |
| [DuReader Retrieval](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval) | Chinese | 86k |
| [KILT Fever](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [KILT HotpotQA](https://huggingface.co/datasets/kilt_tasks) | English | 70k |
| [SQuAD](https://huggingface.co/datasets/squad) | English | 87k |
| [Quora](https://huggingface.co/datasets/quora) | English | 150k |
| [Mr. TyDi](https://huggingface.co/datasets/castorini/mr-tydi) | 11 languages | 50k |
| [MIRACL](https://huggingface.co/datasets/miracl/miracl) | 16 languages | 40k |
For all labeled datasets, we only use their training sets for fine-tuning.
For other training details, please refer to our paper at [https://arxiv.org/pdf/2212.03533.pdf](https://arxiv.org/pdf/2212.03533.pdf).
## Benchmark Results on [Mr. TyDi](https://arxiv.org/abs/2108.08787)
| Model | Avg MRR@10 | | ar | bn | en | fi | id | ja | ko | ru | sw | te | th |
|-----------------------|------------|-------|------| --- | --- | --- | --- | --- | --- | --- |------| --- | --- |
| BM25 | 33.3 | | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 |
| mDPR | 16.7 | | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 |
| BM25 + mDPR | 41.7 | | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 |
| | |
| multilingual-e5-small | 64.4 | | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 |
| multilingual-e5-base | 65.9 | | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 |
| multilingual-e5-large | **70.5** | | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 |
## MTEB Benchmark Evaluation
Check out [unilm/e5](https://github.com/microsoft/unilm/tree/master/e5) to reproduce evaluation results
on the [BEIR](https://arxiv.org/abs/2104.08663) and [MTEB benchmark](https://arxiv.org/abs/2210.07316).
## Support for Sentence Transformers
Below is an example for usage with sentence_transformers.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/multilingual-e5-base')
input_texts = [
'query: how much protein should a female eat',
'query: 南瓜的家常做法',
"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 i s 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or traini ng for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"passage: 1.清炒南瓜丝 原料:嫩南瓜半个 调料:葱、盐、白糖、鸡精 做法: 1、南瓜用刀薄薄的削去表面一层皮 ,用勺子刮去瓤 2、擦成细丝(没有擦菜板就用刀慢慢切成细丝) 3、锅烧热放油,入葱花煸出香味 4、入南瓜丝快速翻炒一分钟左右, 放盐、一点白糖和鸡精调味出锅 2.香葱炒南瓜 原料:南瓜1只 调料:香葱、蒜末、橄榄油、盐 做法: 1、将南瓜去皮,切成片 2、油 锅8成热后,将蒜末放入爆香 3、爆香后,将南瓜片放入,翻炒 4、在翻炒的同时,可以不时地往锅里加水,但不要太多 5、放入盐,炒匀 6、南瓜差不多软和绵了之后,就可以关火 7、撒入香葱,即可出锅"
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
```
Package requirements:
`pip install sentence_transformers~=2.2.2`
Contributors: [michaelfeil](https://huggingface.co/michaelfeil)
## FAQ
**1. Do I need to add the prefix "query: " and "passage: " to input texts?**
Yes, this is how the model was trained; otherwise you will see a performance degradation.
Here are some rules of thumb:
- Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA, ad-hoc information retrieval.
- Use "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, paraphrase retrieval.
- Use "query: " prefix if you want to use embeddings as features, such as linear probing classification, clustering.
**2. Why are my reproduced results slightly different from those reported in the model card?**
Different versions of `transformers` and `pytorch` could cause negligible but non-zero performance differences.
**3. Why do the cosine similarity scores cluster between 0.7 and 1.0?**
This is known and expected behavior, as we use a low temperature of 0.01 for the InfoNCE contrastive loss.
For text embedding tasks like text retrieval or semantic similarity,
what matters is the relative order of the scores instead of the absolute values,
so this should not be an issue.
## Citation
If you find our paper or models helpful, please consider citing them as follows:
```
@article{wang2022text,
title={Text Embeddings by Weakly-Supervised Contrastive Pre-training},
author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Jiao, Binxing and Yang, Linjun and Jiang, Daxin and Majumder, Rangan and Wei, Furu},
journal={arXiv preprint arXiv:2212.03533},
year={2022}
}
```
## Limitations
Long texts will be truncated to at most 512 tokens.
| [
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
YiDuo1999/Llama-3-Physician-8B-Instruct | YiDuo1999 | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-20T13:04:37 | 2024-07-02T10:05:43 | 46 | 5 | ---
license: llama3
---
The official instruct model weights for "Efficient Continual Pre-training by Mitigating the Stability Gap".
## Introduction
This repo contains Llama-3-Physician-8B-Instruct, a medical language model with 8 billion parameters. The model builds on Llama 3: it was first continually pretrained on a high-quality medical sub-corpus from the RefinedWeb dataset and then tuned with diverse medical and general instructions. We also use the three strategies from the paper to mitigate the stability gap during continual pretraining and instruction tuning, which boosts the model's medical task performance and reduces computation costs.
## 💻 Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "YiDuo1999/Llama-3-Physician-8B-Instruct"
device_map = 'auto'
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, use_cache=False, device_map=device_map)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
def askme(question):
sys_message = '''
You are an AI Medical Assistant trained on a vast dataset of health information. Please be thorough and
provide an informative answer. If you don't know the answer to a specific medical inquiry, advise seeking professional help.
'''
# Create messages structured for the chat template
messages = [{"role": "system", "content": sys_message}, {"role": "user", "content": question}]
# Applying chat template
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100, use_cache=True)
# Extract and return the generated text, removing the prompt
response_text = tokenizer.batch_decode(outputs)[0].strip()
answer = response_text.split('<|im_start|>assistant')[-1].strip()
return answer
# Example usage
# - Context: First describe your problem.
# - Question: Then make the question.
question = '''What is HIV?'''
print(askme(question))
```
A typical answer looks like this:
```
HIV, or Human Immunodeficiency Virus, is a retrovirus that primarily infects cells of the human immune system, particularly CD4+ T cells, which are crucial to the body's ability to fight off infection. HIV infection can lead to AIDS, or Acquired Immune Deficiency Syndrome, a condition that causes severe damage to the immune system and makes individuals more susceptible to life-threatening infections. HIV
is transmitted through sexual contact, sharing needles, or through mother-to-child transmission during pregnancy.
```
## 🏆 Evaluation
For question-answering tasks, the results are as follows:
| Model | MMLU-Medical | PubMedQA | MedMCQA | MedQA-4-Option | Avg |
|:--------------------------------|:--------------|:----------|:---------|:----------------|:------|
| Mistral-7B-instruct | 55.8 | 17.8 | 40.2 | 41.1 | 37.5 |
| Zephyr-7B-instruct-β | 63.3 | 46.0 | 43.0 | 48.5 | 48.7 |
| PMC-Llama-7B | 59.7 | 59.2 | 57.6 | 49.2 | 53.6 |
| Medalpaca-13B | 55.2 | 50.4 | 21.2 | 20.2 | 36.7 |
| AlpaCare-13B | 60.2 | 53.8 | 38.5 | 30.4 | 45.7 |
| BioMedGPT-LM 7B | 52.0 | 58.6 | 34.9 | 39.3 | 46.2 |
| Me-Llama-13B | - | 70.0 | 44.9 | 42.7 | - |
| Llama-3-8B instruct | 82.0 | 74.6 | 57.1 | 60.3 | 68.5 |
| JSL-Med-Sft-Llama-3-8B | 83.0 | 75.4 | 57.5 | 74.8 | 72.7 |
| GPT-3.5-turbo-1106 | 74.0 | 72.6 | 34.9 | 39.3 | 60.6 |
| GPT-4 | 85.5 | 69.2 | 69.5 | 83.9 | 77.0 |
| Llama-3-physician-8B instruct (ours) | 80.0 | 76.0 | 80.2 | 60.3 | 74.1 |
For medical classification, relation extraction, natural language inference, and summarization tasks, the results are as follows:
| Task type | Classification | Relation extraction | Natural Language Inference | Summarization |
|:--------------------------------|:----------------|:----------------------|:----------------------------|:---------------|
| Datasets | HOC | DDI-2013 | BioNLI | MIMIC-CXR |
| Mistral-7B-instruct | 35.8 | 14.1 | 16.7 | 12.5 |
| Zephyr-7B-instruct-β | 26.1 | 19.4 | 19.9 | 10.5 |
| PMC-Llama-7B | 18.4 | 14.7 | 15.9 | 13.9 |
| Medalpaca-13B | 24.6 | 5.8 | 16.4 | 1.0 |
| AlpaCare-13B | 26.7 | 11.0 | 17.0 | 13.4 |
| BioMedGPT-LM 7B | 23.4 | 15.5 | 17.9 | 6.2 |
| Me-Llama-13B | 33.5 | 21.4 | 19.5 | 40.0 |
| JSL-Med-Sft-Llama-3-8B | 25.6 | 19.7 | 16.6 | 13.8 |
| Llama-3-8B instruct | 31.0 | 15.1 | 18.8 | 10.3 |
| GPT-3.5-turbo-1106 | 54.5 | 21.6 | 31.7 | 13.5 |
| GPT-4 | 60.2 | 29.2 | 57.8 | 15.2 |
| Llama-3-physician-8B instruct (ours) | 78.9 | 33.6 | 76.2 | 37.7 |
## Citation
```
@inproceedings{Guo2024EfficientCP,
title={Efficient Continual Pre-training by Mitigating the Stability Gap},
author={Yiduo Guo and Jie Fu and Huishuai Zhang and Dongyan Zhao and Yikang Shen},
year={2024},
url={https://api.semanticscholar.org/CorpusID:270688100}
}
``` | [
"RELATION_EXTRACTION",
"SUMMARIZATION"
] | [
"MEDQA",
"PUBMEDQA"
] |
pszemraj/led-large-book-summary-continued | pszemraj | summarization | [
"transformers",
"pytorch",
"safetensors",
"led",
"text2text-generation",
"long document summary",
"book summary",
"booksum",
"summarization",
"en",
"dataset:kmfoda/booksum",
"license:bsd-3-clause",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-09T00:08:08 | 2023-10-05T06:56:11 | 45 | 2 | ---
datasets:
- kmfoda/booksum
language:
- en
library_name: transformers
license:
- bsd-3-clause
- apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- long document summary
- book summary
- booksum
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a ''toolbox'' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg-
max P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur- face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).'
example_title: scientific paper
- text: 'Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By''s the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi''s nerocos and do rain become you to let so is his brother is made in
use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I''m also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I''m a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I''d Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let''s
get on with the lecture. There''s an exciting topic today I''m going to start
by sharing some slides with you and later on during the lecture we''ll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let''s get started. Today''s topic is a very important one. It''s
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It''s called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it''s a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let''s say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It''s got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you''ve seen before,
it''s the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You''re not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.'
example_title: transcribed audio - lecture
- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
& memory complexity (where nn is sequence length). Hence, it''s computationally
very expensive to apply transformer-based models on long sequences n > 512n>512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗''s recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one''s life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird''s attention is an approximation
of BERT''s full attention and therefore does not strive to be better than BERT''s
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT''s quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence ''BigBird is now available in HuggingFace for
extractive question answering''. In BERT-like attention, every word would simply
attend to all other tokens.
Let''s think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let''s consider following sentence as an example >>> example = [''BigBird'',
''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
''question'', ''answering'']
>>> # further let''s assume, we''re trying to understand the representation of
''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently ''available'' token doesn''t have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.'
example_title: bigbird blog intro
- text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical physics
most of the jokes will go over a typical viewer''s head. There''s also Rick''s
nihilistic outlook, which is deftly woven into his characterisation- his personal
philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
understand this stuff; they have the intellectual capacity to truly appreciate
the depths of these jokes, to realise that they''re not just funny- they say something
deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
wit unfolds itself on their television screens. What fools.. how I pity them.
😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
It''s for the ladies'' eyes only- and even then they have to demonstrate that
they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
kid 😎'
example_title: Richard & Mortimer
- text: The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey
building, and the tallest structure in Paris. Its base is square, measuring 125
metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed
the Washington Monument to become the tallest man-made structure in the world,
a title it held for 41 years until the Chrysler Building in New York City was
finished in 1930. It was the first structure to reach a height of 300 metres.
Due to the addition of a broadcasting aerial at the top of the tower in 1957,
it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters,
the Eiffel Tower is the second tallest free-standing structure in France after
the Millau Viaduct.
example_title: eiffel
parameters:
max_length: 64
min_length: 8
no_repeat_ngram_size: 3
early_stopping: true
repetition_penalty: 3.5
encoder_no_repeat_ngram_size: 4
num_beams: 2
model-index:
- name: pszemraj/led-large-book-summary-continued
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 31.2367
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWI3NzQwMTUxOWRkOGVmZGYwZTkyODIxZmRhM2Y5N2FjYmM2MWEyMDNiN2JmODc3ODExNTAwZjhhZDJkNzNiYyIsInZlcnNpb24iOjF9.EYEvooI7WG94OinI4p5sNiuM1MAFVSYeb2ehv2lGe-B-qR1yvPVBBr7J3iI5UFegZsYciCLA6VRFUe8eQ8KNAg
- type: rouge
value: 5.0148
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzMxYjIzMWY2MTNkODczZWEzOGEzNjYxNzZjMTc0N2U3NmFhMWM5NWFiMzBjZDEwNTFkYjhhMGMwMjliY2JjOSIsInZlcnNpb24iOjF9.DmIc7iNjo5nm_T-uWehMCbcWjgY_WNGdRkiUXdzv96uFIRiVIoW03UspkGfzvjEiKRoa7OM403XZxNXuCjVJCQ
- type: rouge
value: 15.7724
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDUzNzNkYjUxMjE1MzZjMDhkNWE2MmZlMTg0OGM1NDc2M2JlZDJmNDI3M2YyZGM2NmY1ZDZlOWYxMzcyYmExZCIsInZlcnNpb24iOjF9.CVjivCusq1J_tiktqQ-pnsH6iOWdYrf5rwt9wlGoCgw4boXzDVivtHpe0MWlJ5L-XFY75SnrMXeunCBGOwONBQ
- type: rouge
value: 28.494
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTY0MjI3NDNkYzI5ZjA1Nzg5MmE0MzY3OTZkM2U2ZWZkMDBjZjQzMjdjN2Q3Y2NiZjIwNzI1OWJhMzhjYzg4NiIsInZlcnNpb24iOjF9.A0iwWEti-OPFbi9TEpnEpC0rPCLP3Gw3Ns23Lz8e_zi4B_vlGrVW7weofzO8cuGVoC9kS-aJk2a5VGdXYh5KBw
- type: loss
value: 4.777158260345459
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWZkNjdhNGNkNDUyYWNlNDgyNzkxNDdkNTZlOGQ0MmQ3ZGVjYjgwZTk2M2E4NjAwNWZkNGEzMTU2ZWFjMmFmMCIsInZlcnNpb24iOjF9.TTEWfYmpM4VPKn1Jukkwadj6C3HASvzTMJeTLHCHqd5Vr7s0X0PcIKvnyEVycwywFanfrgIg4Pyn0G_IVeYcBg
- type: gen_len
value: 154.1908
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmI3YjZkNTZmMzNjMzMzODlhODFmNWFlNjNmODI0ZjE2ZWNjMzcxMWUyMGMzNzY2MDIzZWIwYTMxODk3M2Q3YiIsInZlcnNpb24iOjF9.nyUANcwiu-sb3vXMFIdzvdDPTBBhJOEQmdu25XSXRgwNSfugKDydAoHy2tdo9ZE8r32xxYDPoutER22APV4PCA
---
# led-large-book-summary: continued
Fine-tuned further to explore whether additional training yields any improvements over the default checkpoint.
## Details
This model is a version of [pszemraj/led-large-book-summary](https://huggingface.co/pszemraj/led-large-book-summary) further fine-tuned for two epochs.
## Usage
It's recommended to use this model with [beam search decoding](https://huggingface.co/docs/transformers/generation_strategies#beamsearch-decoding), as in the sketch after the `textsum` example below. If interested, you can also use the `textsum` util repo, which abstracts most of this away for you:
```bash
pip install -U textsum
```
```python
from textsum.summarize import Summarizer
model_name = "pszemraj/led-large-book-summary-continued"
summarizer = Summarizer(model_name) # GPU auto-detected
text = "put the text you don't want to read here"
summary = summarizer.summarize_string(text)
print(summary)
```
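If you prefer to call `transformers` directly, here is a minimal sketch with beam search; the generation parameters mirror the widget settings in this card's metadata and are illustrative rather than tuned values:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/led-large-book-summary-continued",
)

long_text = "put the text you don't want to read here"
result = summarizer(
    long_text,
    max_length=256,
    min_length=8,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=4,
    repetition_penalty=3.5,
    num_beams=2,
    early_stopping=True,
)
print(result[0]["summary_text"])
```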
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 8191
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 2.0
- mixed_precision_training: Native AMP
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"BEAR"
] |
SIRIS-Lab/AIObioEnts-AnatEM-pubmedbert-full | SIRIS-Lab | token-classification | [
"transformers",
"safetensors",
"bert",
"token-classification",
"ner",
"biomedicine",
"base_model:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"base_model:finetune:microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-11-15T18:14:45 | 2024-12-17T12:28:54 | 45 | 0 | ---
base_model:
- microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
library_name: transformers
license: mit
pipeline_tag: token-classification
tags:
- ner
- biomedicine
---
# AIObioEnts: All-in-one biomedical entities
Biomedical named-entity recognition following the all-in-one NER (AIONER) scheme introduced by [Luo *et al.*](https://doi.org/10.1093/bioinformatics/btad310). This is a straightforward Hugging-Face-compatible implementation that omits the decoding head, for easier integration with other pipelines.
**For full details, see the [main GitHub repository](https://github.com/sirisacademic/AIObioEnts/)**
## Anatomical biomedical entities
We have followed the original AIONER training pipeline based on the BioRED dataset along with additional BioRED-compatible datasets for a set of core entities (Gene, Disease, Chemical, Species, Variant, Cell line), which we have fine-tuned using a modified version of the latest release of the [AnatEM](https://nactem.ac.uk/anatomytagger/#AnatEM) corpus and a subset of entities that are of interest to us: *cell*, *cell component*, *tissue*, *multi-tissue structure*, and *organ*, along with the newly-introduced *cancer*. This model corresponds to the implementation based on [BiomedBERT-base pre-trained on both abstracts from PubMed and full-text articles from PubMedCentral](https://huggingface.co/microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext).
**F1 scores**
The F1 scores on the test set of this modified dataset are shown below:
| | **BiomedBERT-base abstract+fulltext** |
| -------------------------- | :-----------------------------------: |
| **Cell** | 87.76 |
| **Cell component** | 81.74 |
| **Tissue** | 72.26 |
| **Cancer** | 89.29 |
| **Organ** | 84.18 |
| **Multi-tissue structure** | 72.65 |
| | |
| **Overall** | 84.22 |
## Usage
The model can be directly used from HuggingFace in a NER pipeline. However, we note that:
- The model was trained on sentence-level data, and it works best when the input is split into sentences
- Each sentence to tag must be surrounded by the flag corresponding to the entity type one wishes to identify, as in: `<entity_type>sentence</entity_type>`. In the case of this fine-tuned model, the entity type should be `'ALL'`.
- Since additional `'O'` labels are used in the AIONER scheme, the outputs should be postprocessed before aggregating the tags
We provide helper functions to tag individual texts in the [main repository](https://github.com/sirisacademic/AIObioEnts/)
````python
from tagging_fn import process_one_text
from transformers import pipeline

pipe = pipeline('ner', model='SIRIS-Lab/AIObioEnts-AnatEM-pubmedbert-full', aggregation_strategy='none', device=0)

text_to_tag = "The mutation was detected in epithelial cells of the colon."  # example input
process_one_text(text_to_tag, pipeline=pipe, entity_type='ALL')
````
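If you prefer not to depend on the helper, below is a minimal sketch of the manual flag-wrapping step. It assumes the flag is written literally as `<ALL>...</ALL>`, following the scheme described above; the extra `'O'` labels still need to be postprocessed before aggregating entities.
```python
from transformers import pipeline

pipe = pipeline('ner', model='SIRIS-Lab/AIObioEnts-AnatEM-pubmedbert-full', aggregation_strategy='none')

# Surround each sentence with the entity-type flag expected by the AIONER scheme.
sentence = "Tumour cells had invaded the surrounding connective tissue."
wrapped = f"<ALL>{sentence}</ALL>"

# Raw token-level predictions; postprocess the additional 'O' labels
# before aggregating them into entity spans.
print(pipe(wrapped))
```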
## References
[[1] Ling Luo, Chih-Hsuan Wei, Po-Ting Lai, Robert Leaman, Qingyu Chen, and Zhiyong Lu. "AIONER: All-in-one scheme-based biomedical named entity recognition using deep learning." Bioinformatics, Volume 39, Issue 5, May 2023, btad310.](https://doi.org/10.1093/bioinformatics/btad310)
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"ANATEM",
"BIORED"
] |
SEBIS/legal_t5_small_summ_es | SEBIS | text2text-generation | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"summarization Spanish model",
"dataset:jrc-acquis",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2022-06-02T19:52:52 | 44 | 0 | ---
datasets:
- jrc-acquis
language: Spanish
tags:
- summarization Spanish model
widget:
- text: '[notificada con el número C(2006) 166] (El texto en lengua portuguesa es
el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto
el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE
del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector
veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando
lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral
ovina en Portugal. La aparición de esta enfermedad puede representar un grave
riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación
de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos
subvencionables que suponen para Portugal la adopción de medidas de urgencia contra
la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello,
el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa
a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre
catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado
varias decisiones para delimitar las zonas de protección y vigilancia y fijar
las condiciones que deben cumplir los animales que vayan a salir de esas zonas;
la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las
zonas de protección y vigilancia en relación con la fiebre catarral ovina y las
condiciones que se aplican a los traslados de animales desde estas zonas o a través
de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en
Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a
las posibilidades de alimentación animal, lo que ha conllevado costes adicionales
para los ganaderos. La situación tiene consecuencias particulares en Portugal,
pues las explotaciones especializadas en reproducción de bovinos y de ovinos están
ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados
de animales, mientras que las especializadas en engorde, que constituyen la salida
lógica de los animales criados en aquéllas, están localizadas fuera de dichas
zonas. (5) Portugal, en colaboración con España, puso en marcha otras medidas
para controlar la epidemia, como la realización de estudios epidemiológicos y
la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas
de laboratorio para el control serológico y virológico en el marco de las pruebas
realizadas a los animales antes de su traslado y en el de la vigilancia entomológica.
(6) Portugal y España han presentado pruebas de su cooperación para evitar la
propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De
conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del
Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola
común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas
comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación
y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse
de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8)
El pago de la contribución financiera de la Comunidad se supedita a la realización
efectiva de las acciones programadas y a la presentación por parte de las autoridades
de toda la información necesaria en los plazos establecidos. (9) El 25 de febrero
de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas
de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra
la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica
se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ
de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda
financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución
de la Comunidad, establecida sobre la base del gasto subvencionable calculado
para las medidas de vigilancia epidemiológica. Procede asimismo determinar los
importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas
utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han
cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas
con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las
medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente
de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN:
Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En
el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas
en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria
del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para
la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica,
incluida la adquisición de trampas. 2. El importe máximo de los gastos que se
reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en
el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR
por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica
(RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa.
3. El impuesto sobre el valor añadido se excluirá de la participación financiera
de la Comunidad. Artículo 2 Modalidades de pago A reserva del resultado de los
controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1,
de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte
de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará
a cabo previa presentación por parte de Portugal de justificantes de las pruebas
de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1,
apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La
ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo
a los siguientes elementos: a) una solicitud que contenga los datos especificados
en el anexo, presentada en el plazo establecido en el apartado 2 del presente
artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá
un informe epidemiológico y un informe financiero; c) el resultado de cualquiera
de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado
1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán
estar disponibles para los controles in situ mencionados en la letra c). 2. La
solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico
en un plazo de 60 días naturales a partir de la fecha de notificación de la presente
Decisión. Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá
un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la
presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero
de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224
de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE)
no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3]
DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión
2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103.
-------------------------------------------------- ANEXO Datos mencionados en
el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número
| Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas
| | | Trampas | | | Total | | -------------------------------------------------- '
---
# legal_t5_small_summ_es model
A model for summarization of legal text written in Spanish. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). This model was trained on three parallel corpora from jrc-acquis.
## Model description
legal_t5_small_summ_es is based on the `t5-small` model and was trained on a large corpus of parallel text. It is a smaller model that scales the baseline t5 model down by using `dmodel = 512`, `dff = 2,048`, 8-headed attention, and only 6 layers each in the encoder and decoder, for a total of about 60 million parameters.
## Intended uses & limitations
The model could be used for summarization of legal texts written in Spanish.
### How to use
Here is how to use this model to summarize legal text written in Spanish in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead, TranslationPipeline

pipeline = TranslationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/legal_t5_small_summ_es"),
    tokenizer=AutoTokenizer.from_pretrained(
        pretrained_model_name_or_path="SEBIS/legal_t5_small_summ_es",
        do_lower_case=False,
        skip_special_tokens=True,
    ),
    device=0,
)
es_text = "[notificada con el número C(2006) 166] (El texto en lengua portuguesa es el único auténtico) (2006/78/CE) LA COMISIÓN DE LAS COMUNIDADES EUROPEAS, Visto el Tratado constitutivo de la Comunidad Europea, Vista la Decisión 90/424/CEE del Consejo, de 26 de junio de 1990, relativa a determinados gastos en el sector veterinario [1], y, en particular, su artículo 3, apartado 2 bis, Considerando lo siguiente: (1) El 24 de noviembre de 2004 se declararon brotes de fiebre catarral ovina en Portugal. La aparición de esta enfermedad puede representar un grave riesgo para la cabaña ganadera de la Comunidad. (2) Para atajar la propagación de la enfermedad en el plazo más breve, la Comunidad debe participar en los gastos subvencionables que suponen para Portugal la adopción de medidas de urgencia contra la enfermedad, en las condiciones previstas en la Decisión 90/424/CEE. Por ello, el 15 de septiembre de 2005 se adoptó la Decisión 2005/660/CE de la Comisión relativa a una ayuda financiera de la Comunidad para medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005 [2]. (3) La Comisión ha adoptado varias decisiones para delimitar las zonas de protección y vigilancia y fijar las condiciones que deben cumplir los animales que vayan a salir de esas zonas; la última de ellas es la Decisión 2005/393/CE, de 23 de mayo de 2005, sobre las zonas de protección y vigilancia en relación con la fiebre catarral ovina y las condiciones que se aplican a los traslados de animales desde estas zonas o a través de ellas [3]. (4) Desde el otoño de 2004, la excepcional escasez de lluvias en Portugal ha afectado gravemente al suministro de forraje y, en consecuencia, a las posibilidades de alimentación animal, lo que ha conllevado costes adicionales para los ganaderos. La situación tiene consecuencias particulares en Portugal, pues las explotaciones especializadas en reproducción de bovinos y de ovinos están ubicadas en las zonas afectadas por las restricciones aplicadas a los traslados de animales, mientras que las especializadas en engorde, que constituyen la salida lógica de los animales criados en aquéllas, están localizadas fuera de dichas zonas. (5) Portugal, en colaboración con España, puso en marcha otras medidas para controlar la epidemia, como la realización de estudios epidemiológicos y la aplicación de medidas de vigilancia de la enfermedad, incluidas las pruebas de laboratorio para el control serológico y virológico en el marco de las pruebas realizadas a los animales antes de su traslado y en el de la vigilancia entomológica. (6) Portugal y España han presentado pruebas de su cooperación para evitar la propagación de la enfermedad tomando medidas de vigilancia de la misma. (7) De conformidad con el artículo 3, apartado 2, del Reglamento (CE) no 1258/1999 del Consejo, de 17 de mayo de 1999, sobre la financiación de la política agrícola común [4], las medidas veterinarias y fitosanitarias ejecutadas según las normas comunitarias son financiadas por la sección Garantía del Fondo Europeo de Orientación y de Garantía Agrícola. El control financiero de estas acciones debe efectuarse de conformidad con lo dispuesto en los artículos 8 y 9 de dicho Reglamento. (8) El pago de la contribución financiera de la Comunidad se supedita a la realización efectiva de las acciones programadas y a la presentación por parte de las autoridades de toda la información necesaria en los plazos establecidos. 
(9) El 25 de febrero de 2005, Portugal presentó un primer cálculo de los costes de las demás medidas de urgencia, como las de vigilancia epidemiológica, tomadas para luchar contra la enfermedad. El importe estimado de las medidas de vigilancia epidemiológica se eleva a 4303336 EUR. (10) A la espera de que se efectúen los controles in situ de la Comisión, procede fijar desde ahora el importe de un primer pago de la ayuda financiera de la Comunidad. Este primer pago ha de ser igual al 50 % de la contribución de la Comunidad, establecida sobre la base del gasto subvencionable calculado para las medidas de vigilancia epidemiológica. Procede asimismo determinar los importes máximos que se reembolsarán en concepto de pruebas realizadas y de trampas utilizadas en el marco de dichas medidas. (11) Las autoridades portuguesas han cumplido íntegramente sus obligaciones técnicas y administrativas relacionadas con las medidas previstas en el artículo 3 de la Decisión 90/424/CEE. (12) Las medidas previstas en la presente Decisión se ajustan al dictamen del Comité permanente de la cadena alimentaria y de sanidad animal. HA ADOPTADO LA PRESENTE DECISIÓN: Artículo 1 Concesión de una ayuda financiera de la Comunidad a Portugal 1. En el marco de las medidas de urgencia contra la fiebre catarral ovina adoptadas en Portugal en 2004 y 2005, Portugal tendrá derecho a una contribución comunitaria del 50 % de los importes desembolsados en concepto de pruebas de laboratorio para la vigilancia serológica y virológica, así como en concepto de vigilancia entomológica, incluida la adquisición de trampas. 2. El importe máximo de los gastos que se reembolsarán a Portugal en concepto de las pruebas y las trampas mencionadas en el apartado 1 no excederá de: a) vigilancia serológica, prueba ELISA: 2,5 EUR por prueba; b) vigilancia virológica, reacción en cadena de la polimerasa retrotranscriptásica (RT.PCR): 15 EUR por prueba; c) vigilancia entomológica, trampa: 160 EUR por trampa. 3. El impuesto sobre el valor añadido se excluirá de la participación financiera de la Comunidad. Artículo 2 Modalidades de pago A reserva del resultado de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE, se efectuará un primer pago de 600000 EUR como parte de la ayuda financiera de la Comunidad prevista en el artículo 1. El pago se llevará a cabo previa presentación por parte de Portugal de justificantes de las pruebas de laboratorio y de la adquisición de las trampas mencionadas en el artículo 1, apartado 1. Artículo 3 Condiciones de pago y documentación justificativa 1. La ayuda financiera de la Comunidad contemplada en el artículo 1 se pagará atendiendo a los siguientes elementos: a) una solicitud que contenga los datos especificados en el anexo, presentada en el plazo establecido en el apartado 2 del presente artículo; b) la documentación justificativa mencionada en el artículo 2, que incluirá un informe epidemiológico y un informe financiero; c) el resultado de cualquiera de los controles in situ llevados a cabo de conformidad con el artículo 9, apartado 1, de la Decisión 90/424/CEE. Los documentos mencionados en la letra b) deberán estar disponibles para los controles in situ mencionados en la letra c). 2. La solicitud mencionada en el apartado 1, letra a), se presentará en formato electrónico en un plazo de 60 días naturales a partir de la fecha de notificación de la presente Decisión. 
Si no se respeta este plazo, la ayuda financiera comunitaria se reducirá un 25 % por cada mes de retraso. Artículo 4 Destinatario El destinatario de la presente Decisión es la República Portuguesa. Hecho en Bruselas, el 31 de enero de 2006. Por la Comisión Markos Kyprianou Miembro de la Comisión [1] DO L 224 de 18.8.1990, p. 19. Decisión modificada en último lugar por el Reglamento (CE) no 806/2003 (DO L 122 de 16.5.2003, p. 1). [2] DO L 244 de 20.9.2005, p. 28. [3] DO L 130 de 24.5.2005, p. 22. Decisión modificada en último lugar por la Decisión 2005/828/CE (DO L 311 de 26.11.2005, p. 37). [4] DO L 160 de 26.6.1999, p. 103. -------------------------------------------------- ANEXO Datos mencionados en el artículo 3, apartado 1, letra a) Gastos | Naturaleza de los costes | Número | Importe (sin IVA) | Pruebas ELISA | | | Pruebas RT.PCR | | | Otras pruebas virológicas | | | Trampas | | | Total | | -------------------------------------------------- "
pipeline([es_text], max_length=512)
```
## Training data
The legal_t5_small_summ_es model was trained on the [JRC-ACQUIS](https://wt-public.emm4u.eu/Acquis/index_2.2.html) dataset, consisting of about 23 thousand texts.
## Training procedure
The model was trained on a single TPU Pod V3-8 for 250K steps in total, using a sequence length of 512 (batch size 64). As noted above, it has approximately 60 million parameters and uses an encoder-decoder architecture. The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
### Preprocessing
A unigram model was trained on 88M lines of text from the parallel corpus (over all possible language pairs) to build the vocabulary (with byte-pair encoding), which is used with this model.
## Evaluation results
When the model is used for the summarization task on the test dataset, it achieves the following results:
Test results:
| Model | Rouge1 | Rouge2 | Rouge Lsum |
|:-----:|:-----:|:-----:|:-----:|
| legal_t5_small_summ_es | 80.23 | 70.16 | 78.69 |
### BibTeX entry and citation info
> Created by [Ahmed Elnaggar/@Elnaggar_AI](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/)
| [
"TRANSLATION",
"SUMMARIZATION"
] | [
"PCR"
] |
RichardErkhov/M4-ai_-_tau-0.5B-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-06-24T22:10:28 | 2024-06-24T22:14:48 | 44 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
tau-0.5B - GGUF
- Model creator: https://huggingface.co/M4-ai/
- Original model: https://huggingface.co/M4-ai/tau-0.5B/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [tau-0.5B.Q2_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q2_K.gguf) | Q2_K | 0.23GB |
| [tau-0.5B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ3_XS.gguf) | IQ3_XS | 0.24GB |
| [tau-0.5B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ3_S.gguf) | IQ3_S | 0.25GB |
| [tau-0.5B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q3_K_S.gguf) | Q3_K_S | 0.25GB |
| [tau-0.5B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ3_M.gguf) | IQ3_M | 0.26GB |
| [tau-0.5B.Q3_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q3_K.gguf) | Q3_K | 0.26GB |
| [tau-0.5B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q3_K_M.gguf) | Q3_K_M | 0.26GB |
| [tau-0.5B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q3_K_L.gguf) | Q3_K_L | 0.28GB |
| [tau-0.5B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ4_XS.gguf) | IQ4_XS | 0.28GB |
| [tau-0.5B.Q4_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_0.gguf) | Q4_0 | 0.29GB |
| [tau-0.5B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.IQ4_NL.gguf) | IQ4_NL | 0.29GB |
| [tau-0.5B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_K_S.gguf) | Q4_K_S | 0.29GB |
| [tau-0.5B.Q4_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_K.gguf) | Q4_K | 0.3GB |
| [tau-0.5B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_K_M.gguf) | Q4_K_M | 0.3GB |
| [tau-0.5B.Q4_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q4_1.gguf) | Q4_1 | 0.3GB |
| [tau-0.5B.Q5_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_0.gguf) | Q5_0 | 0.32GB |
| [tau-0.5B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_K_S.gguf) | Q5_K_S | 0.32GB |
| [tau-0.5B.Q5_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_K.gguf) | Q5_K | 0.33GB |
| [tau-0.5B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_K_M.gguf) | Q5_K_M | 0.33GB |
| [tau-0.5B.Q5_1.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q5_1.gguf) | Q5_1 | 0.34GB |
| [tau-0.5B.Q6_K.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q6_K.gguf) | Q6_K | 0.36GB |
| [tau-0.5B.Q8_0.gguf](https://huggingface.co/RichardErkhov/M4-ai_-_tau-0.5B-gguf/blob/main/tau-0.5B.Q8_0.gguf) | Q8_0 | 0.47GB |
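As a usage sketch (not part of the original upload), one of these files could be fetched and run locally with `huggingface_hub` and `llama-cpp-python`; the quant choice and prompt below are illustrative assumptions:
```python
# Sketch: download one quant from this repo and run it with llama-cpp-python.
# The chosen quant (Q4_K_M) and the prompt are illustrative assumptions.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/M4-ai_-_tau-0.5B-gguf",
    filename="tau-0.5B.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)
output = llm("Explain gradient descent in one paragraph.", max_tokens=128)
print(output["choices"][0]["text"])
```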
Original model description:
---
license: other
datasets:
- Locutusque/UltraTextbooks-2.0
inference:
parameters:
do_sample: true
temperature: 0.8
top_p: 0.95
top_k: 40
max_new_tokens: 250
repetition_penalty: 1.1
language:
- en
- zh
---
# tau-0.5B
## Model Details
- **Model Name:** tau-0.5B
- **Base Model:** Qwen1.5-0.5B
- **Dataset:** UltraTextbooks-2.0
- **Model Size:** 0.5B parameters
- **Model Type:** Language Model
- **Training Procedure:** Further pre-training of Qwen1.5-0.5B on UltraTextbooks-2.0.
## Model Use
tau-0.5B is designed to be a general-purpose language model with enhanced capabilities in the domains of machine learning, mathematics, and coding. It can be used for a wide range of natural language processing tasks, such as:
- Educational question answering
- Text summarization
- Content generation for educational purposes
- Code understanding and generation
- Mathematical problem solving
The model's exposure to the diverse content in the UltraTextbooks-2.0 dataset makes it particularly well-suited for applications in educational technology and research.
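A minimal generation sketch with 🤗 Transformers, assuming the original `M4-ai/tau-0.5B` repository and reusing the sampling values from the card's inference parameters above:
```python
# Sketch: text generation with tau-0.5B via transformers.
# The repo id and prompt are assumptions; the sampling values mirror the
# card's inference parameters (temperature, top_p, top_k, etc.).
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("M4-ai/tau-0.5B")
model = AutoModelForCausalLM.from_pretrained("M4-ai/tau-0.5B")

prompt = "Explain what a hash map is and when to use one."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```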
## Training Data
tau-0.5B was further pre-trained on the UltraTextbooks-2.0 dataset, which is an expanded version of the original UltraTextbooks dataset. UltraTextbooks-2.0 incorporates additional high-quality synthetic and human-written textbooks from various sources on the Hugging Face platform, with a focus on increasing the diversity of content in the domains of machine learning, mathematics, and coding.
For more details on the dataset, please refer to the [UltraTextbooks-2.0 Dataset Card](https://huggingface.co/datasets/Locutusque/UltraTextbooks-2.0).
## Performance and Limitations
Refer to the [Evaluation](#evaluation) section below for benchmark results. It is essential to note that the model may still exhibit biases or inaccuracies present in the training data. Users are encouraged to critically evaluate the model's outputs and report any issues to facilitate continuous improvement.
## Environmental Impact
The training of tau-0.5B required computational resources that contribute to the model's overall environmental impact. However, efforts were made to optimize the training process and minimize the carbon footprint.
## Ethical Considerations
tau-0.5B was trained on a diverse dataset that may contain biases and inaccuracies. Users should be aware of these potential limitations and use the model responsibly. The model should not be used for tasks that could cause harm or discriminate against individuals or groups.
## Evaluation
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------------------------------|-------|------|-----:|--------|-----:|---|-----:|
|agieval_nous |N/A |none | 0|acc |0.2235|± |0.0434|
| | |none | 0|acc_norm|0.2141|± |0.0498|
| - agieval_aqua_rat | 1|none | 0|acc |0.1417|± |0.0219|
| | |none | 0|acc_norm|0.1535|± |0.0227|
| - agieval_logiqa_en | 1|none | 0|acc |0.2796|± |0.0176|
| | |none | 0|acc_norm|0.3118|± |0.0182|
| - agieval_lsat_ar | 1|none | 0|acc |0.2000|± |0.0264|
| | |none | 0|acc_norm|0.1696|± |0.0248|
| - agieval_lsat_lr | 1|none | 0|acc |0.2275|± |0.0186|
| | |none | 0|acc_norm|0.2020|± |0.0178|
| - agieval_lsat_rc | 1|none | 0|acc |0.1487|± |0.0217|
| | |none | 0|acc_norm|0.1561|± |0.0222|
| - agieval_sat_en | 1|none | 0|acc |0.2330|± |0.0295|
| | |none | 0|acc_norm|0.2039|± |0.0281|
| - agieval_sat_en_without_passage| 1|none | 0|acc |0.2524|± |0.0303|
| | |none | 0|acc_norm|0.1942|± |0.0276|
| - agieval_sat_math | 1|none | 0|acc |0.2227|± |0.0281|
| | |none | 0|acc_norm|0.1682|± |0.0253|
| Tasks |Version| Filter |n-shot| Metric |Value | |Stderr|
|---------------------------------------|-------|----------------|-----:|-----------|-----:|---|-----:|
|truthfulqa | 2|none | 0|acc |0.3931|± |0.0143|
|mmlu |N/A |none | 0|acc |0.3642|± |0.0040|
| - humanities |N/A |none | 5|acc |0.3320|± |0.0068|
| - formal_logic | 0|none | 5|acc |0.2619|± |0.0393|
| - high_school_european_history | 0|none | 5|acc |0.4909|± |0.0390|
| - high_school_us_history | 0|none | 5|acc |0.4167|± |0.0346|
| - high_school_world_history | 0|none | 5|acc |0.4641|± |0.0325|
| - international_law | 0|none | 5|acc |0.5537|± |0.0454|
| - jurisprudence | 0|none | 5|acc |0.4167|± |0.0477|
| - logical_fallacies | 0|none | 5|acc |0.2638|± |0.0346|
| - moral_disputes | 0|none | 5|acc |0.3757|± |0.0261|
| - moral_scenarios | 0|none | 5|acc |0.2402|± |0.0143|
| - philosophy | 0|none | 5|acc |0.3794|± |0.0276|
| - prehistory | 0|none | 5|acc |0.3426|± |0.0264|
| - professional_law | 0|none | 5|acc |0.3103|± |0.0118|
| - world_religions | 0|none | 5|acc |0.2807|± |0.0345|
| - other |N/A |none | 5|acc |0.4071|± |0.0088|
| - business_ethics | 0|none | 5|acc |0.4200|± |0.0496|
| - clinical_knowledge | 0|none | 5|acc |0.4491|± |0.0306|
| - college_medicine | 0|none | 5|acc |0.3873|± |0.0371|
| - global_facts | 0|none | 5|acc |0.3600|± |0.0482|
| - human_aging | 0|none | 5|acc |0.3498|± |0.0320|
| - management | 0|none | 5|acc |0.4854|± |0.0495|
| - marketing | 0|none | 5|acc |0.5470|± |0.0326|
| - medical_genetics | 0|none | 5|acc |0.4000|± |0.0492|
| - miscellaneous | 0|none | 5|acc |0.4291|± |0.0177|
| - nutrition | 0|none | 5|acc |0.4183|± |0.0282|
| - professional_accounting | 0|none | 5|acc |0.3582|± |0.0286|
| - professional_medicine | 0|none | 5|acc |0.3015|± |0.0279|
| - virology | 0|none | 5|acc |0.3494|± |0.0371|
| - social_sciences |N/A |none | 5|acc |0.4075|± |0.0088|
| - econometrics | 0|none | 5|acc |0.2719|± |0.0419|
| - high_school_geography | 0|none | 5|acc |0.5000|± |0.0356|
| - high_school_government_and_politics| 0|none | 5|acc |0.4611|± |0.0360|
| - high_school_macroeconomics | 0|none | 5|acc |0.4051|± |0.0249|
| - high_school_microeconomics | 0|none | 5|acc |0.3908|± |0.0317|
| - high_school_psychology | 0|none | 5|acc |0.4239|± |0.0212|
| - human_sexuality | 0|none | 5|acc |0.3893|± |0.0428|
| - professional_psychology | 0|none | 5|acc |0.3399|± |0.0192|
| - public_relations | 0|none | 5|acc |0.4455|± |0.0476|
| - security_studies | 0|none | 5|acc |0.3510|± |0.0306|
| - sociology | 0|none | 5|acc |0.5174|± |0.0353|
| - us_foreign_policy | 0|none | 5|acc |0.5500|± |0.0500|
| - stem |N/A |none | 5|acc |0.3276|± |0.0083|
| - abstract_algebra | 0|none | 5|acc |0.3000|± |0.0461|
| - anatomy | 0|none | 5|acc |0.2889|± |0.0392|
| - astronomy | 0|none | 5|acc |0.3487|± |0.0388|
| - college_biology | 0|none | 5|acc |0.3403|± |0.0396|
| - college_chemistry | 0|none | 5|acc |0.2600|± |0.0441|
| - college_computer_science | 0|none | 5|acc |0.3800|± |0.0488|
| - college_mathematics | 0|none | 5|acc |0.3300|± |0.0473|
| - college_physics | 0|none | 5|acc |0.2745|± |0.0444|
| - computer_security | 0|none | 5|acc |0.4300|± |0.0498|
| - conceptual_physics | 0|none | 5|acc |0.3447|± |0.0311|
| - electrical_engineering | 0|none | 5|acc |0.3931|± |0.0407|
| - elementary_mathematics | 0|none | 5|acc |0.3095|± |0.0238|
| - high_school_biology | 0|none | 5|acc |0.4161|± |0.0280|
| - high_school_chemistry | 0|none | 5|acc |0.2759|± |0.0314|
| - high_school_computer_science | 0|none | 5|acc |0.3100|± |0.0465|
| - high_school_mathematics | 0|none | 5|acc |0.3185|± |0.0284|
| - high_school_physics | 0|none | 5|acc |0.2517|± |0.0354|
| - high_school_statistics | 0|none | 5|acc |0.3009|± |0.0313|
| - machine_learning | 0|none | 5|acc |0.3036|± |0.0436|
|medqa_4options |Yaml |none | 5|acc |0.2687|± |0.0124|
| | |none | 5|acc_norm |0.2687|± |0.0124|
|logieval | 0|get-answer | 5|exact_match|0.3505|± |0.0120|
|gsm8k_cot | 3|strict-match | 8|exact_match|0.0690|± |0.0070|
| | |flexible-extract| 8|exact_match|0.1365|± |0.0095|
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_easy | 1|none | 25|acc |0.5981|± |0.0101|
| | |none | 25|acc_norm|0.5939|± |0.0101|
|arc_challenge| 1|none | 25|acc |0.2688|± |0.0130|
| | |none | 25|acc_norm|0.2969|± |0.0134|
## Usage Rights
Make sure to read Qwen's license before using this model.
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"MEDQA"
] |
minishlab/M2V_base_glove_subword | minishlab | null | [
"model2vec",
"onnx",
"safetensors",
"embeddings",
"static-embeddings",
"mteb",
"sentence-transformers",
"en",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:quantized:BAAI/bge-base-en-v1.5",
"license:mit",
"model-index",
"region:us"
] | 2024-10-02T18:18:36 | 2025-01-21T19:18:20 | 44 | 2 | ---
base_model: BAAI/bge-base-en-v1.5
language:
- en
library_name: model2vec
license: mit
tags:
- embeddings
- static-embeddings
- mteb
- sentence-transformers
model-index:
- name: M2V_base_glove_subword
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.4167916041979
- type: ap
value: 18.202949885376736
- type: ap_weighted
value: 18.202949885376736
- type: f1
value: 54.98453722214898
- type: f1_weighted
value: 72.84623161234782
- type: main_score
value: 66.4167916041979
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 68.044776119403
- type: ap
value: 31.604323176091363
- type: ap_weighted
value: 31.604323176091363
- type: f1
value: 62.53323789238326
- type: f1_weighted
value: 71.2243167389672
- type: main_score
value: 68.044776119403
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification (default)
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 67.21602499999999
- type: ap
value: 62.24635378305934
- type: ap_weighted
value: 62.24635378305934
- type: f1
value: 66.68107362746888
- type: f1_weighted
value: 66.68107362746888
- type: main_score
value: 67.21602499999999
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 32.384
- type: f1
value: 32.05276706247388
- type: f1_weighted
value: 32.05276706247388
- type: main_score
value: 32.384
- task:
type: Retrieval
dataset:
name: MTEB ArguAna (default)
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 29.599999999999998
- type: map_at_1
value: 14.438
- type: map_at_10
value: 23.803
- type: map_at_100
value: 24.85
- type: map_at_1000
value: 24.925
- type: map_at_20
value: 24.395
- type: map_at_3
value: 20.519000000000002
- type: map_at_5
value: 22.183
- type: mrr_at_1
value: 14.65149359886202
- type: mrr_at_10
value: 23.8787847998374
- type: mrr_at_100
value: 24.945306088918446
- type: mrr_at_1000
value: 25.019829460538446
- type: mrr_at_20
value: 24.48722055512828
- type: mrr_at_3
value: 20.661450924608815
- type: mrr_at_5
value: 22.254623044096704
- type: nauc_map_at_1000_diff1
value: 11.677995826704251
- type: nauc_map_at_1000_max
value: -1.7036225489906935
- type: nauc_map_at_1000_std
value: 13.608156164552337
- type: nauc_map_at_100_diff1
value: 11.69898827728831
- type: nauc_map_at_100_max
value: -1.6896771319000576
- type: nauc_map_at_100_std
value: 13.657417732243642
- type: nauc_map_at_10_diff1
value: 11.381029737026354
- type: nauc_map_at_10_max
value: -1.7701185174946374
- type: nauc_map_at_10_std
value: 12.878108250073275
- type: nauc_map_at_1_diff1
value: 13.270492079181698
- type: nauc_map_at_1_max
value: -5.320050131923338
- type: nauc_map_at_1_std
value: 9.145476528935111
- type: nauc_map_at_20_diff1
value: 11.636255256667027
- type: nauc_map_at_20_max
value: -1.5972839976414983
- type: nauc_map_at_20_std
value: 13.42888801202754
- type: nauc_map_at_3_diff1
value: 10.870897941570064
- type: nauc_map_at_3_max
value: -3.2129671196535785
- type: nauc_map_at_3_std
value: 11.017585726260462
- type: nauc_map_at_5_diff1
value: 11.323413777040606
- type: nauc_map_at_5_max
value: -2.4760041260478904
- type: nauc_map_at_5_std
value: 12.029899752157688
- type: nauc_mrr_at_1000_diff1
value: 10.742715816971687
- type: nauc_mrr_at_1000_max
value: -1.7753021168425986
- type: nauc_mrr_at_1000_std
value: 13.427125200171295
- type: nauc_mrr_at_100_diff1
value: 10.765635069630173
- type: nauc_mrr_at_100_max
value: -1.7612670077500088
- type: nauc_mrr_at_100_std
value: 13.47656838026296
- type: nauc_mrr_at_10_diff1
value: 10.35632278742462
- type: nauc_mrr_at_10_max
value: -1.9593749415315034
- type: nauc_mrr_at_10_std
value: 12.726659151321748
- type: nauc_mrr_at_1_diff1
value: 12.18980309927674
- type: nauc_mrr_at_1_max
value: -4.630938342229097
- type: nauc_mrr_at_1_std
value: 8.958732319219887
- type: nauc_mrr_at_20_diff1
value: 10.689736739154682
- type: nauc_mrr_at_20_max
value: -1.689535123826222
- type: nauc_mrr_at_20_std
value: 13.251612129414687
- type: nauc_mrr_at_3_diff1
value: 9.852214578314367
- type: nauc_mrr_at_3_max
value: -3.33487013011876
- type: nauc_mrr_at_3_std
value: 10.877855458667428
- type: nauc_mrr_at_5_diff1
value: 10.270810271458073
- type: nauc_mrr_at_5_max
value: -2.677309074821081
- type: nauc_mrr_at_5_std
value: 11.882706514806639
- type: nauc_ndcg_at_1000_diff1
value: 12.681360792690615
- type: nauc_ndcg_at_1000_max
value: 0.30517667512214525
- type: nauc_ndcg_at_1000_std
value: 17.50402456957222
- type: nauc_ndcg_at_100_diff1
value: 13.169226394338585
- type: nauc_ndcg_at_100_max
value: 0.7398525127020716
- type: nauc_ndcg_at_100_std
value: 18.85172563798729
- type: nauc_ndcg_at_10_diff1
value: 11.874278269234175
- type: nauc_ndcg_at_10_max
value: 0.742178692340471
- type: nauc_ndcg_at_10_std
value: 15.317281484021455
- type: nauc_ndcg_at_1_diff1
value: 13.270492079181698
- type: nauc_ndcg_at_1_max
value: -5.320050131923338
- type: nauc_ndcg_at_1_std
value: 9.145476528935111
- type: nauc_ndcg_at_20_diff1
value: 12.77788972412781
- type: nauc_ndcg_at_20_max
value: 1.3509880113588073
- type: nauc_ndcg_at_20_std
value: 17.20165293396484
- type: nauc_ndcg_at_3_diff1
value: 10.59415387301215
- type: nauc_ndcg_at_3_max
value: -2.5275550083941534
- type: nauc_ndcg_at_3_std
value: 11.765849158403212
- type: nauc_ndcg_at_5_diff1
value: 11.479181039452788
- type: nauc_ndcg_at_5_max
value: -1.1695551867031702
- type: nauc_ndcg_at_5_std
value: 13.366137540722084
- type: nauc_precision_at_1000_diff1
value: 24.13842177102596
- type: nauc_precision_at_1000_max
value: 15.778091220725535
- type: nauc_precision_at_1000_std
value: 57.991198111902065
- type: nauc_precision_at_100_diff1
value: 21.17988197332234
- type: nauc_precision_at_100_max
value: 10.072329200503201
- type: nauc_precision_at_100_std
value: 44.359368185927
- type: nauc_precision_at_10_diff1
value: 13.619970980685995
- type: nauc_precision_at_10_max
value: 7.683020411909876
- type: nauc_precision_at_10_std
value: 21.79402262800611
- type: nauc_precision_at_1_diff1
value: 13.270492079181698
- type: nauc_precision_at_1_max
value: -5.320050131923338
- type: nauc_precision_at_1_std
value: 9.145476528935111
- type: nauc_precision_at_20_diff1
value: 16.97319915821357
- type: nauc_precision_at_20_max
value: 10.315905315799096
- type: nauc_precision_at_20_std
value: 28.82688927043146
- type: nauc_precision_at_3_diff1
value: 10.02754671342287
- type: nauc_precision_at_3_max
value: -0.8699973044493069
- type: nauc_precision_at_3_std
value: 13.603782123513389
- type: nauc_precision_at_5_diff1
value: 12.084329744277978
- type: nauc_precision_at_5_max
value: 2.074626490481966
- type: nauc_precision_at_5_std
value: 16.608205795807304
- type: nauc_recall_at_1000_diff1
value: 24.138421771026135
- type: nauc_recall_at_1000_max
value: 15.778091220725404
- type: nauc_recall_at_1000_std
value: 57.99119811190208
- type: nauc_recall_at_100_diff1
value: 21.179881973322274
- type: nauc_recall_at_100_max
value: 10.072329200503164
- type: nauc_recall_at_100_std
value: 44.359368185926975
- type: nauc_recall_at_10_diff1
value: 13.619970980685975
- type: nauc_recall_at_10_max
value: 7.683020411909859
- type: nauc_recall_at_10_std
value: 21.794022628006108
- type: nauc_recall_at_1_diff1
value: 13.270492079181698
- type: nauc_recall_at_1_max
value: -5.320050131923338
- type: nauc_recall_at_1_std
value: 9.145476528935111
- type: nauc_recall_at_20_diff1
value: 16.973199158213596
- type: nauc_recall_at_20_max
value: 10.315905315799101
- type: nauc_recall_at_20_std
value: 28.82688927043146
- type: nauc_recall_at_3_diff1
value: 10.02754671342289
- type: nauc_recall_at_3_max
value: -0.869997304449278
- type: nauc_recall_at_3_std
value: 13.603782123513424
- type: nauc_recall_at_5_diff1
value: 12.084329744277952
- type: nauc_recall_at_5_max
value: 2.074626490481952
- type: nauc_recall_at_5_std
value: 16.60820579580728
- type: ndcg_at_1
value: 14.438
- type: ndcg_at_10
value: 29.599999999999998
- type: ndcg_at_100
value: 35.062
- type: ndcg_at_1000
value: 37.266
- type: ndcg_at_20
value: 31.734
- type: ndcg_at_3
value: 22.62
- type: ndcg_at_5
value: 25.643
- type: precision_at_1
value: 14.438
- type: precision_at_10
value: 4.843999999999999
- type: precision_at_100
value: 0.748
- type: precision_at_1000
value: 0.093
- type: precision_at_20
value: 2.841
- type: precision_at_3
value: 9.578000000000001
- type: precision_at_5
value: 7.226000000000001
- type: recall_at_1
value: 14.438
- type: recall_at_10
value: 48.435
- type: recall_at_100
value: 74.822
- type: recall_at_1000
value: 92.60300000000001
- type: recall_at_20
value: 56.828
- type: recall_at_3
value: 28.733999999999998
- type: recall_at_5
value: 36.131
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P (default)
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 35.46255145204994
- type: v_measure
value: 35.46255145204994
- type: v_measure_std
value: 14.146815377034603
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S (default)
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 26.34189987196252
- type: v_measure
value: 26.34189987196252
- type: v_measure_std
value: 14.798697652139317
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions (default)
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 52.85912447389551
- type: map
value: 52.85912447389551
- type: mrr
value: 66.7957173635844
- type: nAUC_map_diff1
value: 11.291158204891948
- type: nAUC_map_max
value: 14.0571982637716
- type: nAUC_map_std
value: 7.658903761935503
- type: nAUC_mrr_diff1
value: 13.851083215099605
- type: nAUC_mrr_max
value: 19.44964881732576
- type: nAUC_mrr_std
value: 9.313450884539453
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 73.38282679412139
- type: cosine_spearman
value: 75.59389113278942
- type: euclidean_pearson
value: 46.852724684799625
- type: euclidean_spearman
value: 55.00125324086669
- type: main_score
value: 75.59389113278942
- type: manhattan_pearson
value: 45.7988833997748
- type: manhattan_spearman
value: 53.28856361366204
- type: pearson
value: 73.38282679412139
- type: spearman
value: 75.59389113278942
- task:
type: Classification
dataset:
name: MTEB Banking77Classification (default)
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 71.38636363636363
- type: f1
value: 71.55994805461263
- type: f1_weighted
value: 71.55994805461263
- type: main_score
value: 71.38636363636363
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P (default)
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 31.47309865069476
- type: v_measure
value: 31.47309865069476
- type: v_measure_std
value: 0.6360736715097297
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S (default)
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 22.58199120148109
- type: v_measure
value: 22.58199120148109
- type: v_measure_std
value: 1.1055877138914942
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval (default)
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: main_score
value: 28.518
- type: map_at_1
value: 17.355999999999998
- type: map_at_10
value: 24.007
- type: map_at_100
value: 25.016
- type: map_at_1000
value: 25.176
- type: map_at_20
value: 24.457
- type: map_at_3
value: 21.794
- type: map_at_5
value: 23.04
- type: mrr_at_1
value: 22.603719599427755
- type: mrr_at_10
value: 29.108760814769386
- type: mrr_at_100
value: 29.908376499291993
- type: mrr_at_1000
value: 29.994015228435632
- type: mrr_at_20
value: 29.504080407211593
- type: mrr_at_3
value: 27.25321888412018
- type: mrr_at_5
value: 28.233190271816884
- type: nauc_map_at_1000_diff1
value: 47.869786003745816
- type: nauc_map_at_1000_max
value: 27.54096137497838
- type: nauc_map_at_1000_std
value: -7.400161145378304
- type: nauc_map_at_100_diff1
value: 47.84118234991334
- type: nauc_map_at_100_max
value: 27.54904954135266
- type: nauc_map_at_100_std
value: -7.477944025206194
- type: nauc_map_at_10_diff1
value: 47.9735876072791
- type: nauc_map_at_10_max
value: 27.391055282545462
- type: nauc_map_at_10_std
value: -7.809853508011509
- type: nauc_map_at_1_diff1
value: 58.07291238335911
- type: nauc_map_at_1_max
value: 29.491926251716666
- type: nauc_map_at_1_std
value: -7.759388303825668
- type: nauc_map_at_20_diff1
value: 47.98612480482489
- type: nauc_map_at_20_max
value: 27.475036492625026
- type: nauc_map_at_20_std
value: -7.516599563783101
- type: nauc_map_at_3_diff1
value: 49.45201738384499
- type: nauc_map_at_3_max
value: 27.178788486813954
- type: nauc_map_at_3_std
value: -8.675581883315793
- type: nauc_map_at_5_diff1
value: 48.54428206844137
- type: nauc_map_at_5_max
value: 27.04154567160208
- type: nauc_map_at_5_std
value: -7.985715295487552
- type: nauc_mrr_at_1000_diff1
value: 46.574864956985365
- type: nauc_mrr_at_1000_max
value: 28.087519043166832
- type: nauc_mrr_at_1000_std
value: -6.451015366036509
- type: nauc_mrr_at_100_diff1
value: 46.56229597151685
- type: nauc_mrr_at_100_max
value: 28.097330034559143
- type: nauc_mrr_at_100_std
value: -6.475319386029993
- type: nauc_mrr_at_10_diff1
value: 46.72161155094325
- type: nauc_mrr_at_10_max
value: 28.136796558719162
- type: nauc_mrr_at_10_std
value: -6.804592873002316
- type: nauc_mrr_at_1_diff1
value: 55.89633445168951
- type: nauc_mrr_at_1_max
value: 30.47937590769701
- type: nauc_mrr_at_1_std
value: -7.1323488254717935
- type: nauc_mrr_at_20_diff1
value: 46.693169452232546
- type: nauc_mrr_at_20_max
value: 28.140872936089373
- type: nauc_mrr_at_20_std
value: -6.484331458969132
- type: nauc_mrr_at_3_diff1
value: 47.808872121231374
- type: nauc_mrr_at_3_max
value: 28.510278015059086
- type: nauc_mrr_at_3_std
value: -7.418313420962369
- type: nauc_mrr_at_5_diff1
value: 47.00163108991785
- type: nauc_mrr_at_5_max
value: 28.03825046154691
- type: nauc_mrr_at_5_std
value: -7.007540109114421
- type: nauc_ndcg_at_1000_diff1
value: 44.04808574593522
- type: nauc_ndcg_at_1000_max
value: 26.938526842644773
- type: nauc_ndcg_at_1000_std
value: -4.429274627595189
- type: nauc_ndcg_at_100_diff1
value: 43.556532019049136
- type: nauc_ndcg_at_100_max
value: 27.236734895647253
- type: nauc_ndcg_at_100_std
value: -5.869942528569457
- type: nauc_ndcg_at_10_diff1
value: 44.125042380771696
- type: nauc_ndcg_at_10_max
value: 27.283104729889622
- type: nauc_ndcg_at_10_std
value: -7.250075385018749
- type: nauc_ndcg_at_1_diff1
value: 55.89633445168951
- type: nauc_ndcg_at_1_max
value: 30.47937590769701
- type: nauc_ndcg_at_1_std
value: -7.1323488254717935
- type: nauc_ndcg_at_20_diff1
value: 44.41899784089651
- type: nauc_ndcg_at_20_max
value: 27.132007799782926
- type: nauc_ndcg_at_20_std
value: -6.018341603261965
- type: nauc_ndcg_at_3_diff1
value: 46.43333330203715
- type: nauc_ndcg_at_3_max
value: 26.867159196890523
- type: nauc_ndcg_at_3_std
value: -7.989033187697878
- type: nauc_ndcg_at_5_diff1
value: 44.97708505801694
- type: nauc_ndcg_at_5_max
value: 26.53850652652143
- type: nauc_ndcg_at_5_std
value: -7.429040061351512
- type: nauc_precision_at_1000_diff1
value: 10.90587664149544
- type: nauc_precision_at_1000_max
value: 0.7573834415907932
- type: nauc_precision_at_1000_std
value: 4.187233421717695
- type: nauc_precision_at_100_diff1
value: 16.70162637068987
- type: nauc_precision_at_100_max
value: 15.017760634485006
- type: nauc_precision_at_100_std
value: -1.4401234272452257
- type: nauc_precision_at_10_diff1
value: 27.11447978714884
- type: nauc_precision_at_10_max
value: 25.239563326602838
- type: nauc_precision_at_10_std
value: -5.113529015570373
- type: nauc_precision_at_1_diff1
value: 55.89633445168951
- type: nauc_precision_at_1_max
value: 30.47937590769701
- type: nauc_precision_at_1_std
value: -7.1323488254717935
- type: nauc_precision_at_20_diff1
value: 24.467549645043032
- type: nauc_precision_at_20_max
value: 23.51675958880599
- type: nauc_precision_at_20_std
value: -2.2460962355932654
- type: nauc_precision_at_3_diff1
value: 36.99310143703273
- type: nauc_precision_at_3_max
value: 24.28484429048304
- type: nauc_precision_at_3_std
value: -8.294205947711662
- type: nauc_precision_at_5_diff1
value: 32.53111998357926
- type: nauc_precision_at_5_max
value: 23.890361705484153
- type: nauc_precision_at_5_std
value: -6.119004280837306
- type: nauc_recall_at_1000_diff1
value: 26.372327810550182
- type: nauc_recall_at_1000_max
value: 17.386452637452958
- type: nauc_recall_at_1000_std
value: 17.18893134942721
- type: nauc_recall_at_100_diff1
value: 27.138092417145288
- type: nauc_recall_at_100_max
value: 22.704436530088913
- type: nauc_recall_at_100_std
value: -1.0716953053918568
- type: nauc_recall_at_10_diff1
value: 32.41154313152003
- type: nauc_recall_at_10_max
value: 23.2359443305839
- type: nauc_recall_at_10_std
value: -5.002290149250385
- type: nauc_recall_at_1_diff1
value: 58.07291238335911
- type: nauc_recall_at_1_max
value: 29.491926251716666
- type: nauc_recall_at_1_std
value: -7.759388303825668
- type: nauc_recall_at_20_diff1
value: 33.00899946361021
- type: nauc_recall_at_20_max
value: 22.82808333164438
- type: nauc_recall_at_20_std
value: -1.4141291649557204
- type: nauc_recall_at_3_diff1
value: 38.920601224546644
- type: nauc_recall_at_3_max
value: 23.89232056113095
- type: nauc_recall_at_3_std
value: -7.8481952205795995
- type: nauc_recall_at_5_diff1
value: 35.257535866907
- type: nauc_recall_at_5_max
value: 22.164920959223334
- type: nauc_recall_at_5_std
value: -5.9961105131656725
- type: ndcg_at_1
value: 22.604
- type: ndcg_at_10
value: 28.518
- type: ndcg_at_100
value: 33.442
- type: ndcg_at_1000
value: 36.691
- type: ndcg_at_20
value: 29.918
- type: ndcg_at_3
value: 25.278
- type: ndcg_at_5
value: 26.647
- type: precision_at_1
value: 22.604
- type: precision_at_10
value: 5.608
- type: precision_at_100
value: 1.0210000000000001
- type: precision_at_1000
value: 0.163
- type: precision_at_20
value: 3.319
- type: precision_at_3
value: 12.589
- type: precision_at_5
value: 8.984
- type: recall_at_1
value: 17.355999999999998
- type: recall_at_10
value: 36.59
- type: recall_at_100
value: 59.38099999999999
- type: recall_at_1000
value: 81.382
- type: recall_at_20
value: 41.972
- type: recall_at_3
value: 26.183
- type: recall_at_5
value: 30.653000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval (default)
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: main_score
value: 24.698999999999998
- type: map_at_1
value: 16.182
- type: map_at_10
value: 21.187
- type: map_at_100
value: 22.028
- type: map_at_1000
value: 22.147
- type: map_at_20
value: 21.603
- type: map_at_3
value: 19.689999999999998
- type: map_at_5
value: 20.402
- type: mrr_at_1
value: 20.573248407643312
- type: mrr_at_10
value: 25.743301991709615
- type: mrr_at_100
value: 26.466582692758493
- type: mrr_at_1000
value: 26.54213235591294
- type: mrr_at_20
value: 26.116902322631823
- type: mrr_at_3
value: 24.32059447983014
- type: mrr_at_5
value: 24.960721868365162
- type: nauc_map_at_1000_diff1
value: 43.80371326276162
- type: nauc_map_at_1000_max
value: 10.307189223525215
- type: nauc_map_at_1000_std
value: 1.1410206622059031
- type: nauc_map_at_100_diff1
value: 43.80398291664643
- type: nauc_map_at_100_max
value: 10.294039476698776
- type: nauc_map_at_100_std
value: 1.0838400387773035
- type: nauc_map_at_10_diff1
value: 43.987106322737205
- type: nauc_map_at_10_max
value: 10.44970205412866
- type: nauc_map_at_10_std
value: 0.4638949254801207
- type: nauc_map_at_1_diff1
value: 50.262982039499725
- type: nauc_map_at_1_max
value: 11.253389960693605
- type: nauc_map_at_1_std
value: -1.1369036906864514
- type: nauc_map_at_20_diff1
value: 43.86541706002641
- type: nauc_map_at_20_max
value: 10.333426229095483
- type: nauc_map_at_20_std
value: 0.7704746445769103
- type: nauc_map_at_3_diff1
value: 44.96796698986098
- type: nauc_map_at_3_max
value: 10.573187295958576
- type: nauc_map_at_3_std
value: 0.01433549559929614
- type: nauc_map_at_5_diff1
value: 44.245307311061204
- type: nauc_map_at_5_max
value: 10.644568381319045
- type: nauc_map_at_5_std
value: -0.029700274583380155
- type: nauc_mrr_at_1000_diff1
value: 42.327672613522914
- type: nauc_mrr_at_1000_max
value: 11.6999240554554
- type: nauc_mrr_at_1000_std
value: 2.112897885106764
- type: nauc_mrr_at_100_diff1
value: 42.31642286015079
- type: nauc_mrr_at_100_max
value: 11.68787957194085
- type: nauc_mrr_at_100_std
value: 2.105610688222343
- type: nauc_mrr_at_10_diff1
value: 42.467973855007116
- type: nauc_mrr_at_10_max
value: 11.797064798974974
- type: nauc_mrr_at_10_std
value: 1.9779659522730684
- type: nauc_mrr_at_1_diff1
value: 47.71737815016663
- type: nauc_mrr_at_1_max
value: 14.383095652386146
- type: nauc_mrr_at_1_std
value: -0.07474670021285572
- type: nauc_mrr_at_20_diff1
value: 42.3995701621796
- type: nauc_mrr_at_20_max
value: 11.701616710562975
- type: nauc_mrr_at_20_std
value: 2.085148056092746
- type: nauc_mrr_at_3_diff1
value: 42.95240734385427
- type: nauc_mrr_at_3_max
value: 12.039509345325337
- type: nauc_mrr_at_3_std
value: 1.7687962861822382
- type: nauc_mrr_at_5_diff1
value: 42.694804355468115
- type: nauc_mrr_at_5_max
value: 11.929565017206377
- type: nauc_mrr_at_5_std
value: 1.694875246947431
- type: nauc_ndcg_at_1000_diff1
value: 41.00761525475331
- type: nauc_ndcg_at_1000_max
value: 9.858142865194182
- type: nauc_ndcg_at_1000_std
value: 3.670728963648605
- type: nauc_ndcg_at_100_diff1
value: 40.95449329238105
- type: nauc_ndcg_at_100_max
value: 9.326306956218327
- type: nauc_ndcg_at_100_std
value: 2.8868853641438506
- type: nauc_ndcg_at_10_diff1
value: 41.53254984337585
- type: nauc_ndcg_at_10_max
value: 10.057078591477252
- type: nauc_ndcg_at_10_std
value: 1.604308043004992
- type: nauc_ndcg_at_1_diff1
value: 47.71737815016663
- type: nauc_ndcg_at_1_max
value: 14.383095652386146
- type: nauc_ndcg_at_1_std
value: -0.07474670021285572
- type: nauc_ndcg_at_20_diff1
value: 41.440675477881086
- type: nauc_ndcg_at_20_max
value: 9.630011024652227
- type: nauc_ndcg_at_20_std
value: 2.2157732372759256
- type: nauc_ndcg_at_3_diff1
value: 42.46487256960971
- type: nauc_ndcg_at_3_max
value: 11.038048797533829
- type: nauc_ndcg_at_3_std
value: 1.2243654696200774
- type: nauc_ndcg_at_5_diff1
value: 41.83878536100888
- type: nauc_ndcg_at_5_max
value: 10.720801901432624
- type: nauc_ndcg_at_5_std
value: 0.8712149388513847
- type: nauc_precision_at_1000_diff1
value: 1.5865611853545292
- type: nauc_precision_at_1000_max
value: 6.681393322922304
- type: nauc_precision_at_1000_std
value: 14.974673269542507
- type: nauc_precision_at_100_diff1
value: 13.555729326347315
- type: nauc_precision_at_100_max
value: 7.545824391218551
- type: nauc_precision_at_100_std
value: 13.934044415661273
- type: nauc_precision_at_10_diff1
value: 25.53208157998575
- type: nauc_precision_at_10_max
value: 10.861163675534936
- type: nauc_precision_at_10_std
value: 4.879245837329693
- type: nauc_precision_at_1_diff1
value: 47.71737815016663
- type: nauc_precision_at_1_max
value: 14.383095652386146
- type: nauc_precision_at_1_std
value: -0.07474670021285572
- type: nauc_precision_at_20_diff1
value: 22.554580803838196
- type: nauc_precision_at_20_max
value: 9.173222510159171
- type: nauc_precision_at_20_std
value: 8.91005482914735
- type: nauc_precision_at_3_diff1
value: 33.10508327009392
- type: nauc_precision_at_3_max
value: 12.86002329562499
- type: nauc_precision_at_3_std
value: 2.974310102418383
- type: nauc_precision_at_5_diff1
value: 29.21043001216549
- type: nauc_precision_at_5_max
value: 11.911630406472423
- type: nauc_precision_at_5_std
value: 3.0525160145985994
- type: nauc_recall_at_1000_diff1
value: 30.47927917267733
- type: nauc_recall_at_1000_max
value: 7.6799659504807245
- type: nauc_recall_at_1000_std
value: 12.501272715675682
- type: nauc_recall_at_100_diff1
value: 31.37456182815277
- type: nauc_recall_at_100_max
value: 4.3121178276146
- type: nauc_recall_at_100_std
value: 6.610653786295896
- type: nauc_recall_at_10_diff1
value: 35.70919804366768
- type: nauc_recall_at_10_max
value: 7.164595283036483
- type: nauc_recall_at_10_std
value: 2.511197530002145
- type: nauc_recall_at_1_diff1
value: 50.262982039499725
- type: nauc_recall_at_1_max
value: 11.253389960693605
- type: nauc_recall_at_1_std
value: -1.1369036906864514
- type: nauc_recall_at_20_diff1
value: 34.61353209754079
- type: nauc_recall_at_20_max
value: 5.959396627193594
- type: nauc_recall_at_20_std
value: 4.38802472107702
- type: nauc_recall_at_3_diff1
value: 38.54587550067196
- type: nauc_recall_at_3_max
value: 8.303476446370226
- type: nauc_recall_at_3_std
value: 0.918233189682653
- type: nauc_recall_at_5_diff1
value: 36.97453761390672
- type: nauc_recall_at_5_max
value: 8.452744877863443
- type: nauc_recall_at_5_std
value: 0.31182896781455743
- type: ndcg_at_1
value: 20.573
- type: ndcg_at_10
value: 24.698999999999998
- type: ndcg_at_100
value: 28.626
- type: ndcg_at_1000
value: 31.535999999999998
- type: ndcg_at_20
value: 25.971
- type: ndcg_at_3
value: 22.400000000000002
- type: ndcg_at_5
value: 23.153000000000002
- type: precision_at_1
value: 20.573
- type: precision_at_10
value: 4.682
- type: precision_at_100
value: 0.835
- type: precision_at_1000
value: 0.132
- type: precision_at_20
value: 2.806
- type: precision_at_3
value: 10.955
- type: precision_at_5
value: 7.580000000000001
- type: recall_at_1
value: 16.182
- type: recall_at_10
value: 30.410999999999998
- type: recall_at_100
value: 47.94
- type: recall_at_1000
value: 68.073
- type: recall_at_20
value: 35.241
- type: recall_at_3
value: 23.247999999999998
- type: recall_at_5
value: 25.611
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval (default)
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: main_score
value: 34.837
- type: map_at_1
value: 21.804000000000002
- type: map_at_10
value: 30.117
- type: map_at_100
value: 31.022
- type: map_at_1000
value: 31.123
- type: map_at_20
value: 30.592999999999996
- type: map_at_3
value: 27.485
- type: map_at_5
value: 29.015
- type: mrr_at_1
value: 25.391849529780565
- type: mrr_at_10
value: 33.06018311190724
- type: mrr_at_100
value: 33.86542467064614
- type: mrr_at_1000
value: 33.93133191694629
- type: mrr_at_20
value: 33.48454644646544
- type: mrr_at_3
value: 30.700104493207924
- type: mrr_at_5
value: 32.12016718913267
- type: nauc_map_at_1000_diff1
value: 45.5807513160407
- type: nauc_map_at_1000_max
value: 21.915072082554456
- type: nauc_map_at_1000_std
value: -7.325013122158723
- type: nauc_map_at_100_diff1
value: 45.54127845733458
- type: nauc_map_at_100_max
value: 21.90856139725234
- type: nauc_map_at_100_std
value: -7.378234997163831
- type: nauc_map_at_10_diff1
value: 45.56616787985884
- type: nauc_map_at_10_max
value: 21.977377645141427
- type: nauc_map_at_10_std
value: -7.953791461768689
- type: nauc_map_at_1_diff1
value: 50.13523755859727
- type: nauc_map_at_1_max
value: 22.079872106357826
- type: nauc_map_at_1_std
value: -10.517989063520115
- type: nauc_map_at_20_diff1
value: 45.47328572468456
- type: nauc_map_at_20_max
value: 21.907938618532206
- type: nauc_map_at_20_std
value: -7.654370878334637
- type: nauc_map_at_3_diff1
value: 46.64296035971972
- type: nauc_map_at_3_max
value: 21.55745539420763
- type: nauc_map_at_3_std
value: -9.322387704640397
- type: nauc_map_at_5_diff1
value: 45.87814328869891
- type: nauc_map_at_5_max
value: 21.97551177369846
- type: nauc_map_at_5_std
value: -8.442300800960686
- type: nauc_mrr_at_1000_diff1
value: 46.21214184609282
- type: nauc_mrr_at_1000_max
value: 24.121552423232732
- type: nauc_mrr_at_1000_std
value: -5.197081534530456
- type: nauc_mrr_at_100_diff1
value: 46.192209374562324
- type: nauc_mrr_at_100_max
value: 24.117295080133403
- type: nauc_mrr_at_100_std
value: -5.20106321371411
- type: nauc_mrr_at_10_diff1
value: 46.214433219910426
- type: nauc_mrr_at_10_max
value: 24.337609381566494
- type: nauc_mrr_at_10_std
value: -5.539128286307364
- type: nauc_mrr_at_1_diff1
value: 52.2527723494356
- type: nauc_mrr_at_1_max
value: 25.421197106410293
- type: nauc_mrr_at_1_std
value: -7.805349072851469
- type: nauc_mrr_at_20_diff1
value: 46.10135736013422
- type: nauc_mrr_at_20_max
value: 24.17582977429519
- type: nauc_mrr_at_20_std
value: -5.3844233771043255
- type: nauc_mrr_at_3_diff1
value: 47.089100932315574
- type: nauc_mrr_at_3_max
value: 24.589442349183855
- type: nauc_mrr_at_3_std
value: -6.861652459272909
- type: nauc_mrr_at_5_diff1
value: 46.50908152902759
- type: nauc_mrr_at_5_max
value: 24.44902343275474
- type: nauc_mrr_at_5_std
value: -5.90486733129187
- type: nauc_ndcg_at_1000_diff1
value: 44.01232290993056
- type: nauc_ndcg_at_1000_max
value: 21.7547520856293
- type: nauc_ndcg_at_1000_std
value: -2.8320334767530118
- type: nauc_ndcg_at_100_diff1
value: 43.333079641772805
- type: nauc_ndcg_at_100_max
value: 21.696558885860842
- type: nauc_ndcg_at_100_std
value: -3.8168722593708466
- type: nauc_ndcg_at_10_diff1
value: 43.55004080963945
- type: nauc_ndcg_at_10_max
value: 22.437821635174988
- type: nauc_ndcg_at_10_std
value: -6.156552890106106
- type: nauc_ndcg_at_1_diff1
value: 52.2527723494356
- type: nauc_ndcg_at_1_max
value: 25.421197106410293
- type: nauc_ndcg_at_1_std
value: -7.805349072851469
- type: nauc_ndcg_at_20_diff1
value: 43.09035864009835
- type: nauc_ndcg_at_20_max
value: 21.94863122459976
- type: nauc_ndcg_at_20_std
value: -5.4130728717458965
- type: nauc_ndcg_at_3_diff1
value: 45.44710289580689
- type: nauc_ndcg_at_3_max
value: 22.400341906939868
- type: nauc_ndcg_at_3_std
value: -8.619757656107849
- type: nauc_ndcg_at_5_diff1
value: 44.1896655275832
- type: nauc_ndcg_at_5_max
value: 22.587591758610802
- type: nauc_ndcg_at_5_std
value: -7.2269233073063575
- type: nauc_precision_at_1000_diff1
value: 10.365353118490535
- type: nauc_precision_at_1000_max
value: 7.8252547949888545
- type: nauc_precision_at_1000_std
value: 26.55091491372318
- type: nauc_precision_at_100_diff1
value: 21.049854477557055
- type: nauc_precision_at_100_max
value: 16.20485886511922
- type: nauc_precision_at_100_std
value: 15.969890079702717
- type: nauc_precision_at_10_diff1
value: 32.52426180873231
- type: nauc_precision_at_10_max
value: 22.685662047893707
- type: nauc_precision_at_10_std
value: 1.4729404419557324
- type: nauc_precision_at_1_diff1
value: 52.2527723494356
- type: nauc_precision_at_1_max
value: 25.421197106410293
- type: nauc_precision_at_1_std
value: -7.805349072851469
- type: nauc_precision_at_20_diff1
value: 28.090691152210972
- type: nauc_precision_at_20_max
value: 20.90743423717082
- type: nauc_precision_at_20_std
value: 4.817506381512236
- type: nauc_precision_at_3_diff1
value: 40.80538406829336
- type: nauc_precision_at_3_max
value: 23.323105131070363
- type: nauc_precision_at_3_std
value: -5.540716529624683
- type: nauc_precision_at_5_diff1
value: 36.58280618039231
- type: nauc_precision_at_5_max
value: 23.634816479662742
- type: nauc_precision_at_5_std
value: -1.7820384730109589
- type: nauc_recall_at_1000_diff1
value: 34.29190280951983
- type: nauc_recall_at_1000_max
value: 13.798111582798564
- type: nauc_recall_at_1000_std
value: 28.5351988388723
- type: nauc_recall_at_100_diff1
value: 32.064087882086476
- type: nauc_recall_at_100_max
value: 16.090743768333688
- type: nauc_recall_at_100_std
value: 8.307894883910041
- type: nauc_recall_at_10_diff1
value: 35.79378711197085
- type: nauc_recall_at_10_max
value: 20.68575839918982
- type: nauc_recall_at_10_std
value: -2.946830801840792
- type: nauc_recall_at_1_diff1
value: 50.13523755859727
- type: nauc_recall_at_1_max
value: 22.079872106357826
- type: nauc_recall_at_1_std
value: -10.517989063520115
- type: nauc_recall_at_20_diff1
value: 33.44790152149905
- type: nauc_recall_at_20_max
value: 18.594618679781895
- type: nauc_recall_at_20_std
value: -0.31826446038001266
- type: nauc_recall_at_3_diff1
value: 40.94878372307589
- type: nauc_recall_at_3_max
value: 20.42680666854128
- type: nauc_recall_at_3_std
value: -8.903430047857414
- type: nauc_recall_at_5_diff1
value: 37.927274464064844
- type: nauc_recall_at_5_max
value: 21.06930934356292
- type: nauc_recall_at_5_std
value: -5.831090950499156
- type: ndcg_at_1
value: 25.392
- type: ndcg_at_10
value: 34.837
- type: ndcg_at_100
value: 39.291
- type: ndcg_at_1000
value: 41.676
- type: ndcg_at_20
value: 36.416
- type: ndcg_at_3
value: 29.958000000000002
- type: ndcg_at_5
value: 32.435
- type: precision_at_1
value: 25.392
- type: precision_at_10
value: 5.806
- type: precision_at_100
value: 0.8789999999999999
- type: precision_at_1000
value: 0.117
- type: precision_at_20
value: 3.3320000000000003
- type: precision_at_3
value: 13.501
- type: precision_at_5
value: 9.718
- type: recall_at_1
value: 21.804000000000002
- type: recall_at_10
value: 46.367999999999995
- type: recall_at_100
value: 66.526
- type: recall_at_1000
value: 83.795
- type: recall_at_20
value: 52.201
- type: recall_at_3
value: 33.351
- type: recall_at_5
value: 39.345
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval (default)
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: main_score
value: 15.889000000000001
- type: map_at_1
value: 9.472999999999999
- type: map_at_10
value: 13.439
- type: map_at_100
value: 14.165
- type: map_at_1000
value: 14.267
- type: map_at_20
value: 13.778000000000002
- type: map_at_3
value: 12.136
- type: map_at_5
value: 12.803
- type: mrr_at_1
value: 10.056497175141244
- type: mrr_at_10
value: 14.27383194332347
- type: mrr_at_100
value: 15.012089041940587
- type: mrr_at_1000
value: 15.104068046441926
- type: mrr_at_20
value: 14.623929801790952
- type: mrr_at_3
value: 12.86252354048964
- type: mrr_at_5
value: 13.55743879472693
- type: nauc_map_at_1000_diff1
value: 30.334633457872854
- type: nauc_map_at_1000_max
value: 16.879524053860088
- type: nauc_map_at_1000_std
value: -11.608379714877143
- type: nauc_map_at_100_diff1
value: 30.315313717026044
- type: nauc_map_at_100_max
value: 16.85237939531867
- type: nauc_map_at_100_std
value: -11.622151859571831
- type: nauc_map_at_10_diff1
value: 30.914146463660085
- type: nauc_map_at_10_max
value: 16.957132658303777
- type: nauc_map_at_10_std
value: -11.731838090023269
- type: nauc_map_at_1_diff1
value: 38.059077642105095
- type: nauc_map_at_1_max
value: 17.258898457644563
- type: nauc_map_at_1_std
value: -15.1141417910556
- type: nauc_map_at_20_diff1
value: 30.657379748220464
- type: nauc_map_at_20_max
value: 16.728415773059652
- type: nauc_map_at_20_std
value: -11.58808790930077
- type: nauc_map_at_3_diff1
value: 33.46033892507575
- type: nauc_map_at_3_max
value: 17.063496859962274
- type: nauc_map_at_3_std
value: -12.540868416387656
- type: nauc_map_at_5_diff1
value: 31.833328131003665
- type: nauc_map_at_5_max
value: 16.85136559752421
- type: nauc_map_at_5_std
value: -12.482629966798948
- type: nauc_mrr_at_1000_diff1
value: 29.41507065744396
- type: nauc_mrr_at_1000_max
value: 18.49824554052624
- type: nauc_mrr_at_1000_std
value: -10.326025120569037
- type: nauc_mrr_at_100_diff1
value: 29.379801930215717
- type: nauc_mrr_at_100_max
value: 18.488234248143247
- type: nauc_mrr_at_100_std
value: -10.335639545339422
- type: nauc_mrr_at_10_diff1
value: 29.91432794618661
- type: nauc_mrr_at_10_max
value: 18.724879448569546
- type: nauc_mrr_at_10_std
value: -10.404101745775053
- type: nauc_mrr_at_1_diff1
value: 37.90615317749033
- type: nauc_mrr_at_1_max
value: 18.93535243576158
- type: nauc_mrr_at_1_std
value: -13.352192729903559
- type: nauc_mrr_at_20_diff1
value: 29.578605690031328
- type: nauc_mrr_at_20_max
value: 18.407726379219987
- type: nauc_mrr_at_20_std
value: -10.298490989990624
- type: nauc_mrr_at_3_diff1
value: 32.02343883506372
- type: nauc_mrr_at_3_max
value: 18.633783635235847
- type: nauc_mrr_at_3_std
value: -11.228435347275935
- type: nauc_mrr_at_5_diff1
value: 30.69962523728713
- type: nauc_mrr_at_5_max
value: 18.72446829188985
- type: nauc_mrr_at_5_std
value: -11.138830180701982
- type: nauc_ndcg_at_1000_diff1
value: 25.382297853226866
- type: nauc_ndcg_at_1000_max
value: 17.43716304218148
- type: nauc_ndcg_at_1000_std
value: -10.190696887337486
- type: nauc_ndcg_at_100_diff1
value: 24.735480242752285
- type: nauc_ndcg_at_100_max
value: 16.71943454741711
- type: nauc_ndcg_at_100_std
value: -9.924909206899162
- type: nauc_ndcg_at_10_diff1
value: 27.358228148721842
- type: nauc_ndcg_at_10_max
value: 16.922883804711265
- type: nauc_ndcg_at_10_std
value: -10.016699536056024
- type: nauc_ndcg_at_1_diff1
value: 37.90615317749033
- type: nauc_ndcg_at_1_max
value: 18.93535243576158
- type: nauc_ndcg_at_1_std
value: -13.352192729903559
- type: nauc_ndcg_at_20_diff1
value: 26.463382227572517
- type: nauc_ndcg_at_20_max
value: 16.22031339406569
- type: nauc_ndcg_at_20_std
value: -9.66724467521929
- type: nauc_ndcg_at_3_diff1
value: 31.53806923827287
- type: nauc_ndcg_at_3_max
value: 17.049495750298107
- type: nauc_ndcg_at_3_std
value: -11.58504512374531
- type: nauc_ndcg_at_5_diff1
value: 29.10131680215961
- type: nauc_ndcg_at_5_max
value: 16.786497467751296
- type: nauc_ndcg_at_5_std
value: -11.594059282963107
- type: nauc_precision_at_1000_diff1
value: 5.724183211042247
- type: nauc_precision_at_1000_max
value: 22.481314169026508
- type: nauc_precision_at_1000_std
value: -2.4780053135041844
- type: nauc_precision_at_100_diff1
value: 8.982535905232872
- type: nauc_precision_at_100_max
value: 19.23627381958997
- type: nauc_precision_at_100_std
value: -6.469375758025859
- type: nauc_precision_at_10_diff1
value: 18.446003934213422
- type: nauc_precision_at_10_max
value: 18.317564090743698
- type: nauc_precision_at_10_std
value: -5.258776187738409
- type: nauc_precision_at_1_diff1
value: 37.90615317749033
- type: nauc_precision_at_1_max
value: 18.93535243576158
- type: nauc_precision_at_1_std
value: -13.352192729903559
- type: nauc_precision_at_20_diff1
value: 16.32313052813914
- type: nauc_precision_at_20_max
value: 16.623118796672443
- type: nauc_precision_at_20_std
value: -5.178876021009233
- type: nauc_precision_at_3_diff1
value: 28.153843298140956
- type: nauc_precision_at_3_max
value: 18.261053599119773
- type: nauc_precision_at_3_std
value: -8.633656740784398
- type: nauc_precision_at_5_diff1
value: 22.30147327973116
- type: nauc_precision_at_5_max
value: 17.724668119940276
- type: nauc_precision_at_5_std
value: -9.147827083942738
- type: nauc_recall_at_1000_diff1
value: 12.936742845571006
- type: nauc_recall_at_1000_max
value: 17.728147389670845
- type: nauc_recall_at_1000_std
value: -10.026543773605697
- type: nauc_recall_at_100_diff1
value: 12.196046010910255
- type: nauc_recall_at_100_max
value: 14.320146451643033
- type: nauc_recall_at_100_std
value: -7.059868030131276
- type: nauc_recall_at_10_diff1
value: 19.81974166368456
- type: nauc_recall_at_10_max
value: 15.137717469839288
- type: nauc_recall_at_10_std
value: -6.894031649742936
- type: nauc_recall_at_1_diff1
value: 38.059077642105095
- type: nauc_recall_at_1_max
value: 17.258898457644563
- type: nauc_recall_at_1_std
value: -15.1141417910556
- type: nauc_recall_at_20_diff1
value: 17.87014099435801
- type: nauc_recall_at_20_max
value: 13.410148544576403
- type: nauc_recall_at_20_std
value: -6.139892629545985
- type: nauc_recall_at_3_diff1
value: 27.941355405054267
- type: nauc_recall_at_3_max
value: 15.300277815129304
- type: nauc_recall_at_3_std
value: -10.440312722587832
- type: nauc_recall_at_5_diff1
value: 23.715987229368274
- type: nauc_recall_at_5_max
value: 15.063760707410282
- type: nauc_recall_at_5_std
value: -10.521011536014003
- type: ndcg_at_1
value: 10.056
- type: ndcg_at_10
value: 15.889000000000001
- type: ndcg_at_100
value: 20.007
- type: ndcg_at_1000
value: 23.324
- type: ndcg_at_20
value: 17.127
- type: ndcg_at_3
value: 13.171
- type: ndcg_at_5
value: 14.358
- type: precision_at_1
value: 10.056
- type: precision_at_10
value: 2.588
- type: precision_at_100
value: 0.49300000000000005
- type: precision_at_1000
value: 0.083
- type: precision_at_20
value: 1.559
- type: precision_at_3
value: 5.612
- type: precision_at_5
value: 4.0680000000000005
- type: recall_at_1
value: 9.472999999999999
- type: recall_at_10
value: 22.676
- type: recall_at_100
value: 42.672
- type: recall_at_1000
value: 68.939
- type: recall_at_20
value: 27.462999999999997
- type: recall_at_3
value: 15.383
- type: recall_at_5
value: 18.174
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval (default)
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: main_score
value: 11.0
- type: map_at_1
value: 5.148
- type: map_at_10
value: 8.469999999999999
- type: map_at_100
value: 9.212
- type: map_at_1000
value: 9.322
- type: map_at_20
value: 8.808
- type: map_at_3
value: 7.131
- type: map_at_5
value: 7.815999999999999
- type: mrr_at_1
value: 6.343283582089552
- type: mrr_at_10
value: 10.370913290689412
- type: mrr_at_100
value: 11.152489765865017
- type: mrr_at_1000
value: 11.240647288895591
- type: mrr_at_20
value: 10.741514212977526
- type: mrr_at_3
value: 8.872305140961858
- type: mrr_at_5
value: 9.631011608623549
- type: nauc_map_at_1000_diff1
value: 23.766626012326586
- type: nauc_map_at_1000_max
value: 12.653376257429583
- type: nauc_map_at_1000_std
value: 8.616529960924888
- type: nauc_map_at_100_diff1
value: 23.738827084996768
- type: nauc_map_at_100_max
value: 12.649650411660854
- type: nauc_map_at_100_std
value: 8.541383664809612
- type: nauc_map_at_10_diff1
value: 23.999578907568026
- type: nauc_map_at_10_max
value: 12.71263636252209
- type: nauc_map_at_10_std
value: 7.591195966672301
- type: nauc_map_at_1_diff1
value: 35.57446018071185
- type: nauc_map_at_1_max
value: 14.079653770667337
- type: nauc_map_at_1_std
value: 11.69336879118923
- type: nauc_map_at_20_diff1
value: 24.160966681198037
- type: nauc_map_at_20_max
value: 12.874042661878926
- type: nauc_map_at_20_std
value: 8.47225999927236
- type: nauc_map_at_3_diff1
value: 26.388037294578943
- type: nauc_map_at_3_max
value: 12.836707260430186
- type: nauc_map_at_3_std
value: 6.661759987628506
- type: nauc_map_at_5_diff1
value: 24.670961314269608
- type: nauc_map_at_5_max
value: 12.93683340709218
- type: nauc_map_at_5_std
value: 6.6199426801021435
- type: nauc_mrr_at_1000_diff1
value: 23.216930411387928
- type: nauc_mrr_at_1000_max
value: 15.19292342533299
- type: nauc_mrr_at_1000_std
value: 8.443837847880454
- type: nauc_mrr_at_100_diff1
value: 23.191640457286802
- type: nauc_mrr_at_100_max
value: 15.176060930237956
- type: nauc_mrr_at_100_std
value: 8.438353759551372
- type: nauc_mrr_at_10_diff1
value: 23.641665699722576
- type: nauc_mrr_at_10_max
value: 15.363771027025361
- type: nauc_mrr_at_10_std
value: 7.6943977364817675
- type: nauc_mrr_at_1_diff1
value: 34.13967231695169
- type: nauc_mrr_at_1_max
value: 18.217995055452356
- type: nauc_mrr_at_1_std
value: 11.691078655411745
- type: nauc_mrr_at_20_diff1
value: 23.584124655747633
- type: nauc_mrr_at_20_max
value: 15.504561511128212
- type: nauc_mrr_at_20_std
value: 8.487309205927613
- type: nauc_mrr_at_3_diff1
value: 26.239880657367205
- type: nauc_mrr_at_3_max
value: 15.653548540177347
- type: nauc_mrr_at_3_std
value: 6.349852805707984
- type: nauc_mrr_at_5_diff1
value: 23.976240360223915
- type: nauc_mrr_at_5_max
value: 15.744338647107542
- type: nauc_mrr_at_5_std
value: 6.487124576469712
- type: nauc_ndcg_at_1000_diff1
value: 19.496197697682945
- type: nauc_ndcg_at_1000_max
value: 12.101852407794244
- type: nauc_ndcg_at_1000_std
value: 12.016860314478954
- type: nauc_ndcg_at_100_diff1
value: 18.9745151618046
- type: nauc_ndcg_at_100_max
value: 11.815079877327287
- type: nauc_ndcg_at_100_std
value: 10.61036714041141
- type: nauc_ndcg_at_10_diff1
value: 20.49507024120394
- type: nauc_ndcg_at_10_max
value: 13.081108599437465
- type: nauc_ndcg_at_10_std
value: 7.930411944011889
- type: nauc_ndcg_at_1_diff1
value: 34.13967231695169
- type: nauc_ndcg_at_1_max
value: 18.217995055452356
- type: nauc_ndcg_at_1_std
value: 11.691078655411745
- type: nauc_ndcg_at_20_diff1
value: 20.839258395401707
- type: nauc_ndcg_at_20_max
value: 13.485012044482616
- type: nauc_ndcg_at_20_std
value: 10.423314754071841
- type: nauc_ndcg_at_3_diff1
value: 24.534248413854158
- type: nauc_ndcg_at_3_max
value: 13.612373481617901
- type: nauc_ndcg_at_3_std
value: 5.122655306518725
- type: nauc_ndcg_at_5_diff1
value: 21.45736115604528
- type: nauc_ndcg_at_5_max
value: 13.50049057414957
- type: nauc_ndcg_at_5_std
value: 5.5599020003710375
- type: nauc_precision_at_1000_diff1
value: 5.214729837045339
- type: nauc_precision_at_1000_max
value: 7.049726610933547
- type: nauc_precision_at_1000_std
value: 10.217710184510343
- type: nauc_precision_at_100_diff1
value: 10.428281377918521
- type: nauc_precision_at_100_max
value: 9.592496174158226
- type: nauc_precision_at_100_std
value: 11.524579687966593
- type: nauc_precision_at_10_diff1
value: 13.144126104006663
- type: nauc_precision_at_10_max
value: 12.791519232802509
- type: nauc_precision_at_10_std
value: 7.117254065134753
- type: nauc_precision_at_1_diff1
value: 34.13967231695169
- type: nauc_precision_at_1_max
value: 18.217995055452356
- type: nauc_precision_at_1_std
value: 11.691078655411745
- type: nauc_precision_at_20_diff1
value: 14.534665391717477
- type: nauc_precision_at_20_max
value: 13.373720011165052
- type: nauc_precision_at_20_std
value: 12.735872233304013
- type: nauc_precision_at_3_diff1
value: 20.050332454808
- type: nauc_precision_at_3_max
value: 14.287141036751699
- type: nauc_precision_at_3_std
value: 2.1412848715847774
- type: nauc_precision_at_5_diff1
value: 16.547335020939435
- type: nauc_precision_at_5_max
value: 14.007790386514285
- type: nauc_precision_at_5_std
value: 2.0821824154130835
- type: nauc_recall_at_1000_diff1
value: 12.811540518810224
- type: nauc_recall_at_1000_max
value: 8.292364898702107
- type: nauc_recall_at_1000_std
value: 21.172583907189164
- type: nauc_recall_at_100_diff1
value: 10.763207100689536
- type: nauc_recall_at_100_max
value: 7.433707421662763
- type: nauc_recall_at_100_std
value: 13.860488374098953
- type: nauc_recall_at_10_diff1
value: 14.171919964914773
- type: nauc_recall_at_10_max
value: 12.3310517183378
- type: nauc_recall_at_10_std
value: 8.627373443421941
- type: nauc_recall_at_1_diff1
value: 35.57446018071185
- type: nauc_recall_at_1_max
value: 14.079653770667337
- type: nauc_recall_at_1_std
value: 11.69336879118923
- type: nauc_recall_at_20_diff1
value: 15.254229786832758
- type: nauc_recall_at_20_max
value: 12.944155764013084
- type: nauc_recall_at_20_std
value: 13.947428525952118
- type: nauc_recall_at_3_diff1
value: 19.723050472865584
- type: nauc_recall_at_3_max
value: 12.208432070640235
- type: nauc_recall_at_3_std
value: 3.2560341221626357
- type: nauc_recall_at_5_diff1
value: 14.200616898717133
- type: nauc_recall_at_5_max
value: 12.262563917077088
- type: nauc_recall_at_5_std
value: 4.115380825048154
- type: ndcg_at_1
value: 6.343
- type: ndcg_at_10
value: 11.0
- type: ndcg_at_100
value: 15.332
- type: ndcg_at_1000
value: 18.505
- type: ndcg_at_20
value: 12.280000000000001
- type: ndcg_at_3
value: 8.297
- type: ndcg_at_5
value: 9.482
- type: precision_at_1
value: 6.343
- type: precision_at_10
value: 2.251
- type: precision_at_100
value: 0.516
- type: precision_at_1000
value: 0.091
- type: precision_at_20
value: 1.437
- type: precision_at_3
value: 4.104
- type: precision_at_5
value: 3.234
- type: recall_at_1
value: 5.148
- type: recall_at_10
value: 16.955000000000002
- type: recall_at_100
value: 37.295
- type: recall_at_1000
value: 60.681
- type: recall_at_20
value: 21.847
- type: recall_at_3
value: 9.735000000000001
- type: recall_at_5
value: 12.595999999999998
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval (default)
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: main_score
value: 22.671
- type: map_at_1
value: 13.99
- type: map_at_10
value: 19.16
- type: map_at_100
value: 20.247999999999998
- type: map_at_1000
value: 20.392
- type: map_at_20
value: 19.741
- type: map_at_3
value: 17.527
- type: map_at_5
value: 18.431
- type: mrr_at_1
value: 17.035611164581326
- type: mrr_at_10
value: 22.920886994515485
- type: mrr_at_100
value: 23.890327247971815
- type: mrr_at_1000
value: 23.98416758924587
- type: mrr_at_20
value: 23.478953217825296
- type: mrr_at_3
value: 21.158164902149515
- type: mrr_at_5
value: 22.154315046519095
- type: nauc_map_at_1000_diff1
value: 40.20942586785694
- type: nauc_map_at_1000_max
value: 19.62019855432636
- type: nauc_map_at_1000_std
value: -6.491186533676609
- type: nauc_map_at_100_diff1
value: 40.20129829669095
- type: nauc_map_at_100_max
value: 19.550525879706164
- type: nauc_map_at_100_std
value: -6.557075399749154
- type: nauc_map_at_10_diff1
value: 40.467281905527244
- type: nauc_map_at_10_max
value: 19.43593214249552
- type: nauc_map_at_10_std
value: -7.194947764095804
- type: nauc_map_at_1_diff1
value: 49.99688096548819
- type: nauc_map_at_1_max
value: 22.94216810488251
- type: nauc_map_at_1_std
value: -8.778905956805103
- type: nauc_map_at_20_diff1
value: 40.23228770570461
- type: nauc_map_at_20_max
value: 19.53074463716011
- type: nauc_map_at_20_std
value: -6.93310286275384
- type: nauc_map_at_3_diff1
value: 42.462368040248364
- type: nauc_map_at_3_max
value: 20.15932725435944
- type: nauc_map_at_3_std
value: -7.524246324724258
- type: nauc_map_at_5_diff1
value: 40.874264936734775
- type: nauc_map_at_5_max
value: 19.741200249921643
- type: nauc_map_at_5_std
value: -7.301832585861893
- type: nauc_mrr_at_1000_diff1
value: 36.93104632204301
- type: nauc_mrr_at_1000_max
value: 22.851961632870285
- type: nauc_mrr_at_1000_std
value: -6.050824088401521
- type: nauc_mrr_at_100_diff1
value: 36.90287005748533
- type: nauc_mrr_at_100_max
value: 22.838209556819866
- type: nauc_mrr_at_100_std
value: -6.064342814003103
- type: nauc_mrr_at_10_diff1
value: 36.93428786395009
- type: nauc_mrr_at_10_max
value: 22.89500409199853
- type: nauc_mrr_at_10_std
value: -6.581360935957288
- type: nauc_mrr_at_1_diff1
value: 46.11618926628157
- type: nauc_mrr_at_1_max
value: 27.154042077346617
- type: nauc_mrr_at_1_std
value: -7.408231463170914
- type: nauc_mrr_at_20_diff1
value: 36.964474819881275
- type: nauc_mrr_at_20_max
value: 22.9072805988528
- type: nauc_mrr_at_20_std
value: -6.306124053032698
- type: nauc_mrr_at_3_diff1
value: 38.9506895551962
- type: nauc_mrr_at_3_max
value: 24.218011709989156
- type: nauc_mrr_at_3_std
value: -6.7973818662665995
- type: nauc_mrr_at_5_diff1
value: 37.42273475691658
- type: nauc_mrr_at_5_max
value: 23.270403975249025
- type: nauc_mrr_at_5_std
value: -6.745230968723559
- type: nauc_ndcg_at_1000_diff1
value: 35.79628671266452
- type: nauc_ndcg_at_1000_max
value: 19.26627785321929
- type: nauc_ndcg_at_1000_std
value: -2.569388520550047
- type: nauc_ndcg_at_100_diff1
value: 35.768798848849585
- type: nauc_ndcg_at_100_max
value: 18.377203611905518
- type: nauc_ndcg_at_100_std
value: -3.3799540521604636
- type: nauc_ndcg_at_10_diff1
value: 36.510770710845314
- type: nauc_ndcg_at_10_max
value: 18.461708026439457
- type: nauc_ndcg_at_10_std
value: -6.491226580238661
- type: nauc_ndcg_at_1_diff1
value: 46.11618926628157
- type: nauc_ndcg_at_1_max
value: 27.154042077346617
- type: nauc_ndcg_at_1_std
value: -7.408231463170914
- type: nauc_ndcg_at_20_diff1
value: 36.070548441535124
- type: nauc_ndcg_at_20_max
value: 18.42396263230167
- type: nauc_ndcg_at_20_std
value: -5.61879907431204
- type: nauc_ndcg_at_3_diff1
value: 39.41782933627965
- type: nauc_ndcg_at_3_max
value: 21.047162846620946
- type: nauc_ndcg_at_3_std
value: -6.840755018811107
- type: nauc_ndcg_at_5_diff1
value: 37.17959347569529
- type: nauc_ndcg_at_5_max
value: 19.680732729842823
- type: nauc_ndcg_at_5_std
value: -6.707637987639474
- type: nauc_precision_at_1000_diff1
value: 0.49247246717968796
- type: nauc_precision_at_1000_max
value: 14.62495465729825
- type: nauc_precision_at_1000_std
value: 9.669209534147573
- type: nauc_precision_at_100_diff1
value: 11.5414175528365
- type: nauc_precision_at_100_max
value: 18.504188333036936
- type: nauc_precision_at_100_std
value: 6.194157348432716
- type: nauc_precision_at_10_diff1
value: 23.453163613392075
- type: nauc_precision_at_10_max
value: 20.06043852181855
- type: nauc_precision_at_10_std
value: -3.1717316064536836
- type: nauc_precision_at_1_diff1
value: 46.11618926628157
- type: nauc_precision_at_1_max
value: 27.154042077346617
- type: nauc_precision_at_1_std
value: -7.408231463170914
- type: nauc_precision_at_20_diff1
value: 20.708737669355788
- type: nauc_precision_at_20_max
value: 20.584185448256555
- type: nauc_precision_at_20_std
value: -0.7112923884678451
- type: nauc_precision_at_3_diff1
value: 31.594155528934703
- type: nauc_precision_at_3_max
value: 21.789282355041912
- type: nauc_precision_at_3_std
value: -3.9339318840163666
- type: nauc_precision_at_5_diff1
value: 26.10899513884069
- type: nauc_precision_at_5_max
value: 21.193775642825518
- type: nauc_precision_at_5_std
value: -4.04371021464142
- type: nauc_recall_at_1000_diff1
value: 19.475747590569128
- type: nauc_recall_at_1000_max
value: 10.531569131631349
- type: nauc_recall_at_1000_std
value: 20.376238758750535
- type: nauc_recall_at_100_diff1
value: 24.539661771959622
- type: nauc_recall_at_100_max
value: 8.849671325401761
- type: nauc_recall_at_100_std
value: 8.155353459396068
- type: nauc_recall_at_10_diff1
value: 27.94562559317398
- type: nauc_recall_at_10_max
value: 12.341122611885497
- type: nauc_recall_at_10_std
value: -4.945672050235199
- type: nauc_recall_at_1_diff1
value: 49.99688096548819
- type: nauc_recall_at_1_max
value: 22.94216810488251
- type: nauc_recall_at_1_std
value: -8.778905956805103
- type: nauc_recall_at_20_diff1
value: 26.721295492823483
- type: nauc_recall_at_20_max
value: 11.354327070591353
- type: nauc_recall_at_20_std
value: -2.0775832506536145
- type: nauc_recall_at_3_diff1
value: 35.18424498331245
- type: nauc_recall_at_3_max
value: 16.737206820951112
- type: nauc_recall_at_3_std
value: -6.362047908804104
- type: nauc_recall_at_5_diff1
value: 30.146390141726233
- type: nauc_recall_at_5_max
value: 14.718619551703243
- type: nauc_recall_at_5_std
value: -5.7544278604675165
- type: ndcg_at_1
value: 17.036
- type: ndcg_at_10
value: 22.671
- type: ndcg_at_100
value: 28.105999999999998
- type: ndcg_at_1000
value: 31.432
- type: ndcg_at_20
value: 24.617
- type: ndcg_at_3
value: 19.787
- type: ndcg_at_5
value: 21.122
- type: precision_at_1
value: 17.036
- type: precision_at_10
value: 4.09
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.131
- type: precision_at_20
value: 2.6470000000000002
- type: precision_at_3
value: 9.208
- type: precision_at_5
value: 6.660000000000001
- type: recall_at_1
value: 13.99
- type: recall_at_10
value: 29.743000000000002
- type: recall_at_100
value: 53.735
- type: recall_at_1000
value: 76.785
- type: recall_at_20
value: 36.624
- type: recall_at_3
value: 21.583
- type: recall_at_5
value: 24.937
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval (default)
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: main_score
value: 16.306
- type: map_at_1
value: 8.802999999999999
- type: map_at_10
value: 13.148000000000001
- type: map_at_100
value: 13.971
- type: map_at_1000
value: 14.105
- type: map_at_20
value: 13.529
- type: map_at_3
value: 11.638
- type: map_at_5
value: 12.356
- type: mrr_at_1
value: 11.073059360730593
- type: mrr_at_10
value: 15.919583967529165
- type: mrr_at_100
value: 16.709279732986573
- type: mrr_at_1000
value: 16.815285605955996
- type: mrr_at_20
value: 16.30432527215681
- type: mrr_at_3
value: 14.23135464231354
- type: mrr_at_5
value: 15.041856925418564
- type: nauc_map_at_1000_diff1
value: 30.659955136068056
- type: nauc_map_at_1000_max
value: 18.44163576415389
- type: nauc_map_at_1000_std
value: -3.8367034295883577
- type: nauc_map_at_100_diff1
value: 30.67476361799846
- type: nauc_map_at_100_max
value: 18.428682857132582
- type: nauc_map_at_100_std
value: -3.8897179777637882
- type: nauc_map_at_10_diff1
value: 30.59247711976844
- type: nauc_map_at_10_max
value: 18.705778597272683
- type: nauc_map_at_10_std
value: -5.022221490794733
- type: nauc_map_at_1_diff1
value: 40.141433107510736
- type: nauc_map_at_1_max
value: 23.026643526851306
- type: nauc_map_at_1_std
value: -5.749563342494851
- type: nauc_map_at_20_diff1
value: 30.68509526178602
- type: nauc_map_at_20_max
value: 18.45627985639005
- type: nauc_map_at_20_std
value: -4.406952661617948
- type: nauc_map_at_3_diff1
value: 31.73558283054405
- type: nauc_map_at_3_max
value: 18.205161864303328
- type: nauc_map_at_3_std
value: -5.435667326361934
- type: nauc_map_at_5_diff1
value: 30.794538196458472
- type: nauc_map_at_5_max
value: 18.500170217691768
- type: nauc_map_at_5_std
value: -5.684418245921586
- type: nauc_mrr_at_1000_diff1
value: 29.43077651539303
- type: nauc_mrr_at_1000_max
value: 20.25130465933273
- type: nauc_mrr_at_1000_std
value: -4.403299701181712
- type: nauc_mrr_at_100_diff1
value: 29.42440095545253
- type: nauc_mrr_at_100_max
value: 20.262024168775454
- type: nauc_mrr_at_100_std
value: -4.46104833589502
- type: nauc_mrr_at_10_diff1
value: 29.557535725132624
- type: nauc_mrr_at_10_max
value: 20.517669578964018
- type: nauc_mrr_at_10_std
value: -4.768947635082991
- type: nauc_mrr_at_1_diff1
value: 37.4774948212758
- type: nauc_mrr_at_1_max
value: 23.439278749784055
- type: nauc_mrr_at_1_std
value: -5.157088191908156
- type: nauc_mrr_at_20_diff1
value: 29.48470932914118
- type: nauc_mrr_at_20_max
value: 20.278594953830762
- type: nauc_mrr_at_20_std
value: -4.705845733248912
- type: nauc_mrr_at_3_diff1
value: 30.77059795240642
- type: nauc_mrr_at_3_max
value: 20.391982151070895
- type: nauc_mrr_at_3_std
value: -5.0478682718453385
- type: nauc_mrr_at_5_diff1
value: 30.028856765971984
- type: nauc_mrr_at_5_max
value: 20.557553687197167
- type: nauc_mrr_at_5_std
value: -5.24319954121192
- type: nauc_ndcg_at_1000_diff1
value: 27.40711483349399
- type: nauc_ndcg_at_1000_max
value: 17.126369493537826
- type: nauc_ndcg_at_1000_std
value: 0.5342836524997823
- type: nauc_ndcg_at_100_diff1
value: 27.711441526870356
- type: nauc_ndcg_at_100_max
value: 17.276247470704032
- type: nauc_ndcg_at_100_std
value: -0.8750376980385484
- type: nauc_ndcg_at_10_diff1
value: 27.720574369240204
- type: nauc_ndcg_at_10_max
value: 18.456829787593097
- type: nauc_ndcg_at_10_std
value: -4.216473937357797
- type: nauc_ndcg_at_1_diff1
value: 37.4774948212758
- type: nauc_ndcg_at_1_max
value: 23.439278749784055
- type: nauc_ndcg_at_1_std
value: -5.157088191908156
- type: nauc_ndcg_at_20_diff1
value: 27.746972988773933
- type: nauc_ndcg_at_20_max
value: 17.52494953980253
- type: nauc_ndcg_at_20_std
value: -2.9781030890977322
- type: nauc_ndcg_at_3_diff1
value: 29.522350537696717
- type: nauc_ndcg_at_3_max
value: 18.011604144671008
- type: nauc_ndcg_at_3_std
value: -4.725546369301677
- type: nauc_ndcg_at_5_diff1
value: 28.15851614794711
- type: nauc_ndcg_at_5_max
value: 18.317965726201184
- type: nauc_ndcg_at_5_std
value: -5.54058686011457
- type: nauc_precision_at_1000_diff1
value: 4.343913518236252
- type: nauc_precision_at_1000_max
value: 7.949664745091711
- type: nauc_precision_at_1000_std
value: 2.986855849342956
- type: nauc_precision_at_100_diff1
value: 15.435700494268618
- type: nauc_precision_at_100_max
value: 15.530490741404742
- type: nauc_precision_at_100_std
value: 4.089210125048146
- type: nauc_precision_at_10_diff1
value: 19.57474708128042
- type: nauc_precision_at_10_max
value: 19.632161038711597
- type: nauc_precision_at_10_std
value: -1.7830580435403458
- type: nauc_precision_at_1_diff1
value: 37.4774948212758
- type: nauc_precision_at_1_max
value: 23.439278749784055
- type: nauc_precision_at_1_std
value: -5.157088191908156
- type: nauc_precision_at_20_diff1
value: 20.568797026407644
- type: nauc_precision_at_20_max
value: 17.15052399771233
- type: nauc_precision_at_20_std
value: 0.6381100303472123
- type: nauc_precision_at_3_diff1
value: 23.53527003948809
- type: nauc_precision_at_3_max
value: 18.260774860471376
- type: nauc_precision_at_3_std
value: -4.277699429606214
- type: nauc_precision_at_5_diff1
value: 20.957492799575085
- type: nauc_precision_at_5_max
value: 20.041536239699173
- type: nauc_precision_at_5_std
value: -5.250189398148323
- type: nauc_recall_at_1000_diff1
value: 19.56836100145482
- type: nauc_recall_at_1000_max
value: 7.776560050916105
- type: nauc_recall_at_1000_std
value: 20.13708584784103
- type: nauc_recall_at_100_diff1
value: 22.16510567224014
- type: nauc_recall_at_100_max
value: 11.397641876417932
- type: nauc_recall_at_100_std
value: 7.58221141431797
- type: nauc_recall_at_10_diff1
value: 21.305911125564595
- type: nauc_recall_at_10_max
value: 15.61442350884527
- type: nauc_recall_at_10_std
value: -2.264275057856056
- type: nauc_recall_at_1_diff1
value: 40.141433107510736
- type: nauc_recall_at_1_max
value: 23.026643526851306
- type: nauc_recall_at_1_std
value: -5.749563342494851
- type: nauc_recall_at_20_diff1
value: 21.33360178111777
- type: nauc_recall_at_20_max
value: 13.007427262980725
- type: nauc_recall_at_20_std
value: 0.8315450930852684
- type: nauc_recall_at_3_diff1
value: 24.26871252397936
- type: nauc_recall_at_3_max
value: 13.78009182310998
- type: nauc_recall_at_3_std
value: -4.427807391785745
- type: nauc_recall_at_5_diff1
value: 22.146386144738443
- type: nauc_recall_at_5_max
value: 14.558261310921718
- type: nauc_recall_at_5_std
value: -5.453171833787222
- type: ndcg_at_1
value: 11.073
- type: ndcg_at_10
value: 16.306
- type: ndcg_at_100
value: 20.605
- type: ndcg_at_1000
value: 24.321
- type: ndcg_at_20
value: 17.605999999999998
- type: ndcg_at_3
value: 13.242
- type: ndcg_at_5
value: 14.424000000000001
- type: precision_at_1
value: 11.073
- type: precision_at_10
value: 3.174
- type: precision_at_100
value: 0.632
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 1.981
- type: precision_at_3
value: 6.317
- type: precision_at_5
value: 4.658
- type: recall_at_1
value: 8.802999999999999
- type: recall_at_10
value: 23.294999999999998
- type: recall_at_100
value: 42.543
- type: recall_at_1000
value: 69.501
- type: recall_at_20
value: 27.788
- type: recall_at_3
value: 14.935
- type: recall_at_5
value: 17.862000000000002
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval (default)
type: CQADupstackRetrieval_is_a_combined_dataset
config: default
split: test
revision: CQADupstackRetrieval_is_a_combined_dataset
metrics:
- type: main_score
value: 19.211500000000004
- type: ndcg_at_10
value: 19.211500000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval (default)
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: main_score
value: 13.274
- type: map_at_1
value: 7.514
- type: map_at_10
value: 10.763
- type: map_at_100
value: 11.466
- type: map_at_1000
value: 11.565
- type: map_at_20
value: 11.153
- type: map_at_3
value: 9.489
- type: map_at_5
value: 10.05
- type: mrr_at_1
value: 9.049079754601227
- type: mrr_at_10
value: 12.66140812153082
- type: mrr_at_100
value: 13.34440731558096
- type: mrr_at_1000
value: 13.431250805407094
- type: mrr_at_20
value: 13.015821938908093
- type: mrr_at_3
value: 11.349693251533745
- type: mrr_at_5
value: 11.955521472392643
- type: nauc_map_at_1000_diff1
value: 22.974932209110474
- type: nauc_map_at_1000_max
value: 19.2179493418811
- type: nauc_map_at_1000_std
value: -4.027224925667458
- type: nauc_map_at_100_diff1
value: 23.00306330611636
- type: nauc_map_at_100_max
value: 19.279597737188887
- type: nauc_map_at_100_std
value: -4.054272921846715
- type: nauc_map_at_10_diff1
value: 23.185643422536508
- type: nauc_map_at_10_max
value: 19.620815876636478
- type: nauc_map_at_10_std
value: -4.67640325592363
- type: nauc_map_at_1_diff1
value: 29.800345069729406
- type: nauc_map_at_1_max
value: 23.87910907490326
- type: nauc_map_at_1_std
value: -6.320599828399073
- type: nauc_map_at_20_diff1
value: 23.142569498191413
- type: nauc_map_at_20_max
value: 19.48779289778967
- type: nauc_map_at_20_std
value: -4.111902735804231
- type: nauc_map_at_3_diff1
value: 25.743034910929975
- type: nauc_map_at_3_max
value: 20.90755349054651
- type: nauc_map_at_3_std
value: -5.380592645823912
- type: nauc_map_at_5_diff1
value: 23.42137416675548
- type: nauc_map_at_5_max
value: 19.329228837468158
- type: nauc_map_at_5_std
value: -5.563525004474619
- type: nauc_mrr_at_1000_diff1
value: 24.10086479687415
- type: nauc_mrr_at_1000_max
value: 20.398011792778824
- type: nauc_mrr_at_1000_std
value: -2.1446120511727957
- type: nauc_mrr_at_100_diff1
value: 24.115697677435794
- type: nauc_mrr_at_100_max
value: 20.458646264375886
- type: nauc_mrr_at_100_std
value: -2.151550159504517
- type: nauc_mrr_at_10_diff1
value: 24.293579862933555
- type: nauc_mrr_at_10_max
value: 20.839345603643498
- type: nauc_mrr_at_10_std
value: -2.480503488415708
- type: nauc_mrr_at_1_diff1
value: 31.141124432852486
- type: nauc_mrr_at_1_max
value: 25.3974393459875
- type: nauc_mrr_at_1_std
value: -4.603112328474119
- type: nauc_mrr_at_20_diff1
value: 24.199943135873237
- type: nauc_mrr_at_20_max
value: 20.685578492011537
- type: nauc_mrr_at_20_std
value: -2.216739386860867
- type: nauc_mrr_at_3_diff1
value: 27.18978712305054
- type: nauc_mrr_at_3_max
value: 21.95145492661433
- type: nauc_mrr_at_3_std
value: -3.3010871727045004
- type: nauc_mrr_at_5_diff1
value: 24.55785813047769
- type: nauc_mrr_at_5_max
value: 20.630334122680697
- type: nauc_mrr_at_5_std
value: -3.4751492733475713
- type: nauc_ndcg_at_1000_diff1
value: 18.214182224000904
- type: nauc_ndcg_at_1000_max
value: 15.022677670245125
- type: nauc_ndcg_at_1000_std
value: -1.2757783952996276
- type: nauc_ndcg_at_100_diff1
value: 19.45648169337917
- type: nauc_ndcg_at_100_max
value: 16.160731902664246
- type: nauc_ndcg_at_100_std
value: -1.2021617745185982
- type: nauc_ndcg_at_10_diff1
value: 20.78032928549088
- type: nauc_ndcg_at_10_max
value: 18.37701966895512
- type: nauc_ndcg_at_10_std
value: -2.859756963061105
- type: nauc_ndcg_at_1_diff1
value: 31.141124432852486
- type: nauc_ndcg_at_1_max
value: 25.3974393459875
- type: nauc_ndcg_at_1_std
value: -4.603112328474119
- type: nauc_ndcg_at_20_diff1
value: 20.568804870494365
- type: nauc_ndcg_at_20_max
value: 17.688797629532804
- type: nauc_ndcg_at_20_std
value: -1.601270033947706
- type: nauc_ndcg_at_3_diff1
value: 25.352168775398777
- type: nauc_ndcg_at_3_max
value: 20.42319619108203
- type: nauc_ndcg_at_3_std
value: -4.2521134409577845
- type: nauc_ndcg_at_5_diff1
value: 21.18713014585295
- type: nauc_ndcg_at_5_max
value: 17.939191093215953
- type: nauc_ndcg_at_5_std
value: -4.743032229404275
- type: nauc_precision_at_1000_diff1
value: 4.892829090188313
- type: nauc_precision_at_1000_max
value: 7.933069592889083
- type: nauc_precision_at_1000_std
value: 4.24278581923629
- type: nauc_precision_at_100_diff1
value: 13.066398116495034
- type: nauc_precision_at_100_max
value: 14.384247527346716
- type: nauc_precision_at_100_std
value: 6.056873634302884
- type: nauc_precision_at_10_diff1
value: 16.616656372852148
- type: nauc_precision_at_10_max
value: 18.665616620054436
- type: nauc_precision_at_10_std
value: 1.1124326621912484
- type: nauc_precision_at_1_diff1
value: 31.141124432852486
- type: nauc_precision_at_1_max
value: 25.3974393459875
- type: nauc_precision_at_1_std
value: -4.603112328474119
- type: nauc_precision_at_20_diff1
value: 17.294215780840165
- type: nauc_precision_at_20_max
value: 18.09538722850449
- type: nauc_precision_at_20_std
value: 5.524315844370954
- type: nauc_precision_at_3_diff1
value: 25.1866897673422
- type: nauc_precision_at_3_max
value: 19.72076391537079
- type: nauc_precision_at_3_std
value: -1.6649392928833502
- type: nauc_precision_at_5_diff1
value: 17.254095768389526
- type: nauc_precision_at_5_max
value: 16.94859363403111
- type: nauc_precision_at_5_std
value: -1.9187213027734356
- type: nauc_recall_at_1000_diff1
value: 2.1491291924120404
- type: nauc_recall_at_1000_max
value: -0.6564763388554173
- type: nauc_recall_at_1000_std
value: 2.480520716627822
- type: nauc_recall_at_100_diff1
value: 10.764856128055248
- type: nauc_recall_at_100_max
value: 6.734689971662489
- type: nauc_recall_at_100_std
value: 3.0407690200004334
- type: nauc_recall_at_10_diff1
value: 14.979718773625542
- type: nauc_recall_at_10_max
value: 14.109838347838258
- type: nauc_recall_at_10_std
value: -0.5378433013187329
- type: nauc_recall_at_1_diff1
value: 29.800345069729406
- type: nauc_recall_at_1_max
value: 23.87910907490326
- type: nauc_recall_at_1_std
value: -6.320599828399073
- type: nauc_recall_at_20_diff1
value: 14.511882633459333
- type: nauc_recall_at_20_max
value: 12.011480653201415
- type: nauc_recall_at_20_std
value: 2.0767690218465877
- type: nauc_recall_at_3_diff1
value: 20.6626126323687
- type: nauc_recall_at_3_max
value: 17.25857728630443
- type: nauc_recall_at_3_std
value: -3.7939883071411717
- type: nauc_recall_at_5_diff1
value: 14.1235036082108
- type: nauc_recall_at_5_max
value: 12.727411826064857
- type: nauc_recall_at_5_std
value: -4.60850604165874
- type: ndcg_at_1
value: 9.049
- type: ndcg_at_10
value: 13.274
- type: ndcg_at_100
value: 17.086000000000002
- type: ndcg_at_1000
value: 19.936999999999998
- type: ndcg_at_20
value: 14.582999999999998
- type: ndcg_at_3
value: 10.725999999999999
- type: ndcg_at_5
value: 11.623
- type: precision_at_1
value: 9.049
- type: precision_at_10
value: 2.423
- type: precision_at_100
value: 0.479
- type: precision_at_1000
value: 0.079
- type: precision_at_20
value: 1.526
- type: precision_at_3
value: 4.9590000000000005
- type: precision_at_5
value: 3.62
- type: recall_at_1
value: 7.514
- type: recall_at_10
value: 19.31
- type: recall_at_100
value: 37.413999999999994
- type: recall_at_1000
value: 59.021
- type: recall_at_20
value: 24.21
- type: recall_at_3
value: 12.113999999999999
- type: recall_at_5
value: 14.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval (default)
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: main_score
value: 10.994
- type: map_at_1
value: 6.225
- type: map_at_10
value: 8.953999999999999
- type: map_at_100
value: 9.603
- type: map_at_1000
value: 9.712
- type: map_at_20
value: 9.278
- type: map_at_3
value: 8.074
- type: map_at_5
value: 8.547
- type: mrr_at_1
value: 7.708189951823813
- type: mrr_at_10
value: 11.010238805317954
- type: mrr_at_100
value: 11.697852969394127
- type: mrr_at_1000
value: 11.788096222755389
- type: mrr_at_20
value: 11.36125747114887
- type: mrr_at_3
value: 9.967882541867406
- type: mrr_at_5
value: 10.53223216334021
- type: nauc_map_at_1000_diff1
value: 28.62895539988389
- type: nauc_map_at_1000_max
value: 16.242894414293037
- type: nauc_map_at_1000_std
value: -4.569604418870727
- type: nauc_map_at_100_diff1
value: 28.61807781605406
- type: nauc_map_at_100_max
value: 16.21900205663456
- type: nauc_map_at_100_std
value: -4.742228052779668
- type: nauc_map_at_10_diff1
value: 29.55698899178743
- type: nauc_map_at_10_max
value: 16.619065435982105
- type: nauc_map_at_10_std
value: -5.272914850396907
- type: nauc_map_at_1_diff1
value: 38.11099020611636
- type: nauc_map_at_1_max
value: 19.754663729177466
- type: nauc_map_at_1_std
value: -7.100435784719483
- type: nauc_map_at_20_diff1
value: 28.96213016918891
- type: nauc_map_at_20_max
value: 16.40536013245705
- type: nauc_map_at_20_std
value: -5.152060847207817
- type: nauc_map_at_3_diff1
value: 31.518330681088514
- type: nauc_map_at_3_max
value: 17.648594434363673
- type: nauc_map_at_3_std
value: -5.013522244046003
- type: nauc_map_at_5_diff1
value: 30.53555288667588
- type: nauc_map_at_5_max
value: 17.552873944829003
- type: nauc_map_at_5_std
value: -5.459559007946099
- type: nauc_mrr_at_1000_diff1
value: 28.56870451139856
- type: nauc_mrr_at_1000_max
value: 18.199477946334998
- type: nauc_mrr_at_1000_std
value: -3.83210753499382
- type: nauc_mrr_at_100_diff1
value: 28.55289316686771
- type: nauc_mrr_at_100_max
value: 18.190933266659705
- type: nauc_mrr_at_100_std
value: -3.910114024174217
- type: nauc_mrr_at_10_diff1
value: 29.44010525180224
- type: nauc_mrr_at_10_max
value: 18.5618742276953
- type: nauc_mrr_at_10_std
value: -4.318500155132472
- type: nauc_mrr_at_1_diff1
value: 37.756041398612425
- type: nauc_mrr_at_1_max
value: 22.180382124822522
- type: nauc_mrr_at_1_std
value: -6.881985725496932
- type: nauc_mrr_at_20_diff1
value: 28.862633708506863
- type: nauc_mrr_at_20_max
value: 18.368745544312883
- type: nauc_mrr_at_20_std
value: -4.231869471717514
- type: nauc_mrr_at_3_diff1
value: 31.67790485910417
- type: nauc_mrr_at_3_max
value: 20.067426011874694
- type: nauc_mrr_at_3_std
value: -4.35750935851484
- type: nauc_mrr_at_5_diff1
value: 30.3892346503623
- type: nauc_mrr_at_5_max
value: 19.427471974651258
- type: nauc_mrr_at_5_std
value: -4.501090877808792
- type: nauc_ndcg_at_1000_diff1
value: 23.124264919835152
- type: nauc_ndcg_at_1000_max
value: 13.725127541654583
- type: nauc_ndcg_at_1000_std
value: 0.8488267118015322
- type: nauc_ndcg_at_100_diff1
value: 22.931912676541813
- type: nauc_ndcg_at_100_max
value: 13.573133160305714
- type: nauc_ndcg_at_100_std
value: -1.9712575029716004
- type: nauc_ndcg_at_10_diff1
value: 26.49225179330549
- type: nauc_ndcg_at_10_max
value: 15.334589645844614
- type: nauc_ndcg_at_10_std
value: -4.732200420388755
- type: nauc_ndcg_at_1_diff1
value: 37.756041398612425
- type: nauc_ndcg_at_1_max
value: 22.180382124822522
- type: nauc_ndcg_at_1_std
value: -6.881985725496932
- type: nauc_ndcg_at_20_diff1
value: 24.758487984247115
- type: nauc_ndcg_at_20_max
value: 14.685319575357777
- type: nauc_ndcg_at_20_std
value: -4.432729957713687
- type: nauc_ndcg_at_3_diff1
value: 30.04172743163936
- type: nauc_ndcg_at_3_max
value: 17.942422342704166
- type: nauc_ndcg_at_3_std
value: -4.371869609553122
- type: nauc_ndcg_at_5_diff1
value: 28.394597447013364
- type: nauc_ndcg_at_5_max
value: 17.337563726465902
- type: nauc_ndcg_at_5_std
value: -4.979815289974346
- type: nauc_precision_at_1000_diff1
value: 13.358015963281982
- type: nauc_precision_at_1000_max
value: 13.588027398642533
- type: nauc_precision_at_1000_std
value: 16.038391304073617
- type: nauc_precision_at_100_diff1
value: 14.048154067920237
- type: nauc_precision_at_100_max
value: 13.442039272771812
- type: nauc_precision_at_100_std
value: 6.293550136432713
- type: nauc_precision_at_10_diff1
value: 19.7938197345429
- type: nauc_precision_at_10_max
value: 15.498999930693053
- type: nauc_precision_at_10_std
value: -2.820921985501471
- type: nauc_precision_at_1_diff1
value: 37.756041398612425
- type: nauc_precision_at_1_max
value: 22.180382124822522
- type: nauc_precision_at_1_std
value: -6.881985725496932
- type: nauc_precision_at_20_diff1
value: 16.86330177780297
- type: nauc_precision_at_20_max
value: 14.757498925286052
- type: nauc_precision_at_20_std
value: -1.4878113085077458
- type: nauc_precision_at_3_diff1
value: 26.22068335923554
- type: nauc_precision_at_3_max
value: 19.552244504819107
- type: nauc_precision_at_3_std
value: -2.903836612504541
- type: nauc_precision_at_5_diff1
value: 23.01543740291806
- type: nauc_precision_at_5_max
value: 18.976238791156298
- type: nauc_precision_at_5_std
value: -3.772870601995056
- type: nauc_recall_at_1000_diff1
value: 11.344856628291772
- type: nauc_recall_at_1000_max
value: 5.496064714954898
- type: nauc_recall_at_1000_std
value: 14.552915745152944
- type: nauc_recall_at_100_diff1
value: 11.37183345326816
- type: nauc_recall_at_100_max
value: 6.152609534633153
- type: nauc_recall_at_100_std
value: 3.3240506595168617
- type: nauc_recall_at_10_diff1
value: 19.414706457137537
- type: nauc_recall_at_10_max
value: 10.013408222848447
- type: nauc_recall_at_10_std
value: -4.469998335412016
- type: nauc_recall_at_1_diff1
value: 38.11099020611636
- type: nauc_recall_at_1_max
value: 19.754663729177466
- type: nauc_recall_at_1_std
value: -7.100435784719483
- type: nauc_recall_at_20_diff1
value: 15.570619584248163
- type: nauc_recall_at_20_max
value: 8.816676896160281
- type: nauc_recall_at_20_std
value: -3.7706693105174836
- type: nauc_recall_at_3_diff1
value: 25.664091285326485
- type: nauc_recall_at_3_max
value: 14.868700645447488
- type: nauc_recall_at_3_std
value: -3.5813114627791736
- type: nauc_recall_at_5_diff1
value: 22.650699032516435
- type: nauc_recall_at_5_max
value: 14.046776424466485
- type: nauc_recall_at_5_std
value: -5.072422590207594
- type: ndcg_at_1
value: 7.707999999999999
- type: ndcg_at_10
value: 10.994
- type: ndcg_at_100
value: 14.562
- type: ndcg_at_1000
value: 17.738
- type: ndcg_at_20
value: 12.152000000000001
- type: ndcg_at_3
value: 9.286999999999999
- type: ndcg_at_5
value: 10.057
- type: precision_at_1
value: 7.707999999999999
- type: precision_at_10
value: 2.068
- type: precision_at_100
value: 0.466
- type: precision_at_1000
value: 0.08800000000000001
- type: precision_at_20
value: 1.352
- type: precision_at_3
value: 4.508
- type: precision_at_5
value: 3.3169999999999997
- type: recall_at_1
value: 6.225
- type: recall_at_10
value: 15.177999999999999
- type: recall_at_100
value: 31.726
- type: recall_at_1000
value: 55.286
- type: recall_at_20
value: 19.516
- type: recall_at_3
value: 10.381
- type: recall_at_5
value: 12.354999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval (default)
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: main_score
value: 17.415
- type: map_at_1
value: 11.61
- type: map_at_10
value: 14.879000000000001
- type: map_at_100
value: 15.64
- type: map_at_1000
value: 15.744
- type: map_at_20
value: 15.222
- type: map_at_3
value: 13.818
- type: map_at_5
value: 14.221
- type: mrr_at_1
value: 14.085820895522389
- type: mrr_at_10
value: 17.784144752428336
- type: mrr_at_100
value: 18.59055632302295
- type: mrr_at_1000
value: 18.680733729013262
- type: mrr_at_20
value: 18.159102701666594
- type: mrr_at_3
value: 16.68221393034826
- type: mrr_at_5
value: 17.10665422885572
- type: nauc_map_at_1000_diff1
value: 39.56056915227938
- type: nauc_map_at_1000_max
value: 27.13397943596498
- type: nauc_map_at_1000_std
value: -7.0908382945611175
- type: nauc_map_at_100_diff1
value: 39.54030188989168
- type: nauc_map_at_100_max
value: 27.13281562979474
- type: nauc_map_at_100_std
value: -7.165159503138965
- type: nauc_map_at_10_diff1
value: 40.318171341397765
- type: nauc_map_at_10_max
value: 27.535451283580016
- type: nauc_map_at_10_std
value: -7.689737441073707
- type: nauc_map_at_1_diff1
value: 47.05601088674895
- type: nauc_map_at_1_max
value: 30.576608334052853
- type: nauc_map_at_1_std
value: -9.67702524348975
- type: nauc_map_at_20_diff1
value: 39.80136558735939
- type: nauc_map_at_20_max
value: 27.051853945437948
- type: nauc_map_at_20_std
value: -7.409144616339466
- type: nauc_map_at_3_diff1
value: 42.15633029927089
- type: nauc_map_at_3_max
value: 28.386143076096086
- type: nauc_map_at_3_std
value: -9.106105164113686
- type: nauc_map_at_5_diff1
value: 41.46860741828094
- type: nauc_map_at_5_max
value: 28.202178480215373
- type: nauc_map_at_5_std
value: -8.399626801433124
- type: nauc_mrr_at_1000_diff1
value: 37.78472411053756
- type: nauc_mrr_at_1000_max
value: 28.338277069066432
- type: nauc_mrr_at_1000_std
value: -7.391912169514899
- type: nauc_mrr_at_100_diff1
value: 37.74697100045658
- type: nauc_mrr_at_100_max
value: 28.35832528792151
- type: nauc_mrr_at_100_std
value: -7.4298805804754995
- type: nauc_mrr_at_10_diff1
value: 38.428674914285196
- type: nauc_mrr_at_10_max
value: 28.708508212507105
- type: nauc_mrr_at_10_std
value: -7.884064754659524
- type: nauc_mrr_at_1_diff1
value: 45.69997352898185
- type: nauc_mrr_at_1_max
value: 32.47880480030532
- type: nauc_mrr_at_1_std
value: -9.337266605729418
- type: nauc_mrr_at_20_diff1
value: 37.99989625388078
- type: nauc_mrr_at_20_max
value: 28.255616608253824
- type: nauc_mrr_at_20_std
value: -7.614369324242356
- type: nauc_mrr_at_3_diff1
value: 40.126736669268766
- type: nauc_mrr_at_3_max
value: 29.616770044400464
- type: nauc_mrr_at_3_std
value: -9.336882852739908
- type: nauc_mrr_at_5_diff1
value: 39.41517859913304
- type: nauc_mrr_at_5_max
value: 29.312224024493094
- type: nauc_mrr_at_5_std
value: -8.792379282413792
- type: nauc_ndcg_at_1000_diff1
value: 34.318717429678735
- type: nauc_ndcg_at_1000_max
value: 24.57185685965525
- type: nauc_ndcg_at_1000_std
value: -2.367526484055821
- type: nauc_ndcg_at_100_diff1
value: 33.59453283807552
- type: nauc_ndcg_at_100_max
value: 24.73858681825266
- type: nauc_ndcg_at_100_std
value: -4.087141295771279
- type: nauc_ndcg_at_10_diff1
value: 36.635105955522235
- type: nauc_ndcg_at_10_max
value: 25.975386842872318
- type: nauc_ndcg_at_10_std
value: -6.3751364798979315
- type: nauc_ndcg_at_1_diff1
value: 45.69997352898185
- type: nauc_ndcg_at_1_max
value: 32.47880480030532
- type: nauc_ndcg_at_1_std
value: -9.337266605729418
- type: nauc_ndcg_at_20_diff1
value: 35.16876791291799
- type: nauc_ndcg_at_20_max
value: 24.477658044207647
- type: nauc_ndcg_at_20_std
value: -5.555064208738701
- type: nauc_ndcg_at_3_diff1
value: 39.82534185570945
- type: nauc_ndcg_at_3_max
value: 28.139721552476963
- type: nauc_ndcg_at_3_std
value: -9.160710946542384
- type: nauc_ndcg_at_5_diff1
value: 38.98115351105197
- type: nauc_ndcg_at_5_max
value: 27.515452028134202
- type: nauc_ndcg_at_5_std
value: -8.025551102160557
- type: nauc_precision_at_1000_diff1
value: 12.303392079476001
- type: nauc_precision_at_1000_max
value: 15.521101561430214
- type: nauc_precision_at_1000_std
value: 13.875729823362349
- type: nauc_precision_at_100_diff1
value: 15.718813920537666
- type: nauc_precision_at_100_max
value: 20.036566730817615
- type: nauc_precision_at_100_std
value: 5.068608226979542
- type: nauc_precision_at_10_diff1
value: 25.3121404066982
- type: nauc_precision_at_10_max
value: 24.190797754465372
- type: nauc_precision_at_10_std
value: -3.28815407741081
- type: nauc_precision_at_1_diff1
value: 45.69997352898185
- type: nauc_precision_at_1_max
value: 32.47880480030532
- type: nauc_precision_at_1_std
value: -9.337266605729418
- type: nauc_precision_at_20_diff1
value: 21.370193752136633
- type: nauc_precision_at_20_max
value: 19.74829392747058
- type: nauc_precision_at_20_std
value: -1.1434647531180093
- type: nauc_precision_at_3_diff1
value: 33.27263719269652
- type: nauc_precision_at_3_max
value: 27.28958835327579
- type: nauc_precision_at_3_std
value: -9.03699952848916
- type: nauc_precision_at_5_diff1
value: 31.109130426292463
- type: nauc_precision_at_5_max
value: 26.959336149040137
- type: nauc_precision_at_5_std
value: -6.946474296738139
- type: nauc_recall_at_1000_diff1
value: 17.923508430691957
- type: nauc_recall_at_1000_max
value: 10.80984639138324
- type: nauc_recall_at_1000_std
value: 17.38699739341662
- type: nauc_recall_at_100_diff1
value: 17.188512794168755
- type: nauc_recall_at_100_max
value: 15.470956979815659
- type: nauc_recall_at_100_std
value: 4.263468796063786
- type: nauc_recall_at_10_diff1
value: 27.628371666732892
- type: nauc_recall_at_10_max
value: 19.847290125705662
- type: nauc_recall_at_10_std
value: -2.718782096589473
- type: nauc_recall_at_1_diff1
value: 47.05601088674895
- type: nauc_recall_at_1_max
value: 30.576608334052853
- type: nauc_recall_at_1_std
value: -9.67702524348975
- type: nauc_recall_at_20_diff1
value: 23.787114240920214
- type: nauc_recall_at_20_max
value: 15.65621275614017
- type: nauc_recall_at_20_std
value: -0.6996887505536454
- type: nauc_recall_at_3_diff1
value: 37.16605995449111
- type: nauc_recall_at_3_max
value: 24.971735910807293
- type: nauc_recall_at_3_std
value: -8.874845333377282
- type: nauc_recall_at_5_diff1
value: 34.15194539098878
- type: nauc_recall_at_5_max
value: 23.788685123818407
- type: nauc_recall_at_5_std
value: -6.520745742182325
- type: ndcg_at_1
value: 14.086000000000002
- type: ndcg_at_10
value: 17.415
- type: ndcg_at_100
value: 21.705
- type: ndcg_at_1000
value: 24.851
- type: ndcg_at_20
value: 18.674
- type: ndcg_at_3
value: 15.369
- type: ndcg_at_5
value: 15.903
- type: precision_at_1
value: 14.086000000000002
- type: precision_at_10
value: 2.9010000000000002
- type: precision_at_100
value: 0.567
- type: precision_at_1000
value: 0.093
- type: precision_at_20
value: 1.754
- type: precision_at_3
value: 6.903
- type: precision_at_5
value: 4.571
- type: recall_at_1
value: 11.61
- type: recall_at_10
value: 22.543
- type: recall_at_100
value: 42.586
- type: recall_at_1000
value: 66.3
- type: recall_at_20
value: 27.296
- type: recall_at_3
value: 16.458000000000002
- type: recall_at_5
value: 18.087
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval (default)
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: main_score
value: 21.398
- type: map_at_1
value: 12.418
- type: map_at_10
value: 17.634
- type: map_at_100
value: 18.427
- type: map_at_1000
value: 18.601
- type: map_at_20
value: 17.949
- type: map_at_3
value: 16.070999999999998
- type: map_at_5
value: 16.909
- type: mrr_at_1
value: 16.007905138339922
- type: mrr_at_10
value: 21.244275048622875
- type: mrr_at_100
value: 21.913675154893422
- type: mrr_at_1000
value: 22.00394675539023
- type: mrr_at_20
value: 21.484105638892164
- type: mrr_at_3
value: 19.729907773386028
- type: mrr_at_5
value: 20.579710144927535
- type: nauc_map_at_1000_diff1
value: 33.276058954347164
- type: nauc_map_at_1000_max
value: 22.686785676254438
- type: nauc_map_at_1000_std
value: -15.623983007245663
- type: nauc_map_at_100_diff1
value: 33.277163035857754
- type: nauc_map_at_100_max
value: 22.79483533389435
- type: nauc_map_at_100_std
value: -15.806523169464585
- type: nauc_map_at_10_diff1
value: 33.31349011893446
- type: nauc_map_at_10_max
value: 23.16070733276047
- type: nauc_map_at_10_std
value: -16.557456309767332
- type: nauc_map_at_1_diff1
value: 43.560854870215444
- type: nauc_map_at_1_max
value: 22.785972852704127
- type: nauc_map_at_1_std
value: -17.629946377144794
- type: nauc_map_at_20_diff1
value: 33.570999449061176
- type: nauc_map_at_20_max
value: 22.993901876226587
- type: nauc_map_at_20_std
value: -16.272939675166977
- type: nauc_map_at_3_diff1
value: 35.03763295449743
- type: nauc_map_at_3_max
value: 22.445582103531297
- type: nauc_map_at_3_std
value: -16.560038144492275
- type: nauc_map_at_5_diff1
value: 34.27964006257987
- type: nauc_map_at_5_max
value: 23.332248714244795
- type: nauc_map_at_5_std
value: -16.57243447707981
- type: nauc_mrr_at_1000_diff1
value: 32.944240054080296
- type: nauc_mrr_at_1000_max
value: 21.812793329305745
- type: nauc_mrr_at_1000_std
value: -13.642145832181225
- type: nauc_mrr_at_100_diff1
value: 32.92776460042595
- type: nauc_mrr_at_100_max
value: 21.791203022888052
- type: nauc_mrr_at_100_std
value: -13.640560468524749
- type: nauc_mrr_at_10_diff1
value: 32.9752685024834
- type: nauc_mrr_at_10_max
value: 22.104988021339146
- type: nauc_mrr_at_10_std
value: -14.271356854639786
- type: nauc_mrr_at_1_diff1
value: 42.51316330983356
- type: nauc_mrr_at_1_max
value: 23.297138888078976
- type: nauc_mrr_at_1_std
value: -14.903606813837882
- type: nauc_mrr_at_20_diff1
value: 33.22223363073958
- type: nauc_mrr_at_20_max
value: 21.974295331873055
- type: nauc_mrr_at_20_std
value: -13.88205443342369
- type: nauc_mrr_at_3_diff1
value: 33.993832814261395
- type: nauc_mrr_at_3_max
value: 21.556945052605887
- type: nauc_mrr_at_3_std
value: -13.797171517214505
- type: nauc_mrr_at_5_diff1
value: 33.35409476101201
- type: nauc_mrr_at_5_max
value: 21.981426511175837
- type: nauc_mrr_at_5_std
value: -14.09531063812787
- type: nauc_ndcg_at_1000_diff1
value: 29.438860831545004
- type: nauc_ndcg_at_1000_max
value: 21.25973393436945
- type: nauc_ndcg_at_1000_std
value: -11.16393916502227
- type: nauc_ndcg_at_100_diff1
value: 28.444184419510172
- type: nauc_ndcg_at_100_max
value: 21.18616561891909
- type: nauc_ndcg_at_100_std
value: -12.037980607459001
- type: nauc_ndcg_at_10_diff1
value: 29.271087139678205
- type: nauc_ndcg_at_10_max
value: 22.032768110468098
- type: nauc_ndcg_at_10_std
value: -15.467782849927971
- type: nauc_ndcg_at_1_diff1
value: 42.51316330983356
- type: nauc_ndcg_at_1_max
value: 23.297138888078976
- type: nauc_ndcg_at_1_std
value: -14.903606813837882
- type: nauc_ndcg_at_20_diff1
value: 30.46132048728029
- type: nauc_ndcg_at_20_max
value: 21.81477297472493
- type: nauc_ndcg_at_20_std
value: -14.218418166481491
- type: nauc_ndcg_at_3_diff1
value: 32.0153358591922
- type: nauc_ndcg_at_3_max
value: 20.770546204709458
- type: nauc_ndcg_at_3_std
value: -14.747432002736549
- type: nauc_ndcg_at_5_diff1
value: 30.981699893250898
- type: nauc_ndcg_at_5_max
value: 22.090548813686304
- type: nauc_ndcg_at_5_std
value: -15.09612387707668
- type: nauc_precision_at_1000_diff1
value: 7.2014592078746125
- type: nauc_precision_at_1000_max
value: -5.678465880888778
- type: nauc_precision_at_1000_std
value: 22.430084503019
- type: nauc_precision_at_100_diff1
value: 7.47376139946301
- type: nauc_precision_at_100_max
value: 2.300260757829557
- type: nauc_precision_at_100_std
value: 13.810673946221709
- type: nauc_precision_at_10_diff1
value: 15.542740121996912
- type: nauc_precision_at_10_max
value: 15.807667200751279
- type: nauc_precision_at_10_std
value: -9.58878382311598
- type: nauc_precision_at_1_diff1
value: 42.51316330983356
- type: nauc_precision_at_1_max
value: 23.297138888078976
- type: nauc_precision_at_1_std
value: -14.903606813837882
- type: nauc_precision_at_20_diff1
value: 17.44141625096109
- type: nauc_precision_at_20_max
value: 12.987380515646793
- type: nauc_precision_at_20_std
value: -3.3241327401895018
- type: nauc_precision_at_3_diff1
value: 24.31306633873876
- type: nauc_precision_at_3_max
value: 20.59991114197874
- type: nauc_precision_at_3_std
value: -12.702555430555881
- type: nauc_precision_at_5_diff1
value: 21.113937977245538
- type: nauc_precision_at_5_max
value: 19.40330569402618
- type: nauc_precision_at_5_std
value: -11.001297546039366
- type: nauc_recall_at_1000_diff1
value: 14.316639289353503
- type: nauc_recall_at_1000_max
value: 14.663280590084184
- type: nauc_recall_at_1000_std
value: 10.373834237194783
- type: nauc_recall_at_100_diff1
value: 14.159748016577145
- type: nauc_recall_at_100_max
value: 15.266942159548291
- type: nauc_recall_at_100_std
value: 0.09898266158022606
- type: nauc_recall_at_10_diff1
value: 19.311511962157848
- type: nauc_recall_at_10_max
value: 21.086642659351444
- type: nauc_recall_at_10_std
value: -15.03280805118371
- type: nauc_recall_at_1_diff1
value: 43.560854870215444
- type: nauc_recall_at_1_max
value: 22.785972852704127
- type: nauc_recall_at_1_std
value: -17.629946377144794
- type: nauc_recall_at_20_diff1
value: 22.84188696362324
- type: nauc_recall_at_20_max
value: 19.255833980651115
- type: nauc_recall_at_20_std
value: -10.769401250685878
- type: nauc_recall_at_3_diff1
value: 25.289776971942963
- type: nauc_recall_at_3_max
value: 19.495340268606647
- type: nauc_recall_at_3_std
value: -14.682485696338162
- type: nauc_recall_at_5_diff1
value: 23.28267489764339
- type: nauc_recall_at_5_max
value: 21.90368937976734
- type: nauc_recall_at_5_std
value: -15.19826645274188
- type: ndcg_at_1
value: 16.008
- type: ndcg_at_10
value: 21.398
- type: ndcg_at_100
value: 25.241999999999997
- type: ndcg_at_1000
value: 28.833
- type: ndcg_at_20
value: 22.234
- type: ndcg_at_3
value: 18.86
- type: ndcg_at_5
value: 20.037
- type: precision_at_1
value: 16.008
- type: precision_at_10
value: 4.328
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_20
value: 2.579
- type: precision_at_3
value: 9.157
- type: precision_at_5
value: 6.837999999999999
- type: recall_at_1
value: 12.418
- type: recall_at_10
value: 27.935
- type: recall_at_100
value: 47.525
- type: recall_at_1000
value: 72.146
- type: recall_at_20
value: 31.861
- type: recall_at_3
value: 20.148
- type: recall_at_5
value: 23.296
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval (default)
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 13.536999999999999
- type: map_at_1
value: 7.468
- type: map_at_10
value: 10.972999999999999
- type: map_at_100
value: 11.744
- type: map_at_1000
value: 11.854000000000001
- type: map_at_20
value: 11.336
- type: map_at_3
value: 9.618
- type: map_at_5
value: 10.205
- type: mrr_at_1
value: 8.317929759704251
- type: mrr_at_10
value: 12.179752369216331
- type: mrr_at_100
value: 12.980085498763907
- type: mrr_at_1000
value: 13.075701345231755
- type: mrr_at_20
value: 12.550195110376356
- type: mrr_at_3
value: 10.659272951324708
- type: mrr_at_5
value: 11.30622304374615
- type: nauc_map_at_1000_diff1
value: 25.499689183541758
- type: nauc_map_at_1000_max
value: 26.492088085006486
- type: nauc_map_at_1000_std
value: -10.29049248054652
- type: nauc_map_at_100_diff1
value: 25.573124155292685
- type: nauc_map_at_100_max
value: 26.56159077339433
- type: nauc_map_at_100_std
value: -10.400824123310946
- type: nauc_map_at_10_diff1
value: 25.485224554587006
- type: nauc_map_at_10_max
value: 26.83491339438951
- type: nauc_map_at_10_std
value: -11.212653836584204
- type: nauc_map_at_1_diff1
value: 33.63991109177576
- type: nauc_map_at_1_max
value: 34.23354700535017
- type: nauc_map_at_1_std
value: -13.602316051776613
- type: nauc_map_at_20_diff1
value: 25.401091624302076
- type: nauc_map_at_20_max
value: 26.619190203647534
- type: nauc_map_at_20_std
value: -10.956292541627727
- type: nauc_map_at_3_diff1
value: 26.825203283397762
- type: nauc_map_at_3_max
value: 27.86659163589406
- type: nauc_map_at_3_std
value: -11.12760272108276
- type: nauc_map_at_5_diff1
value: 25.95917424438333
- type: nauc_map_at_5_max
value: 26.96719585977185
- type: nauc_map_at_5_std
value: -12.304191598798255
- type: nauc_mrr_at_1000_diff1
value: 26.058089211778814
- type: nauc_mrr_at_1000_max
value: 25.715522107102462
- type: nauc_mrr_at_1000_std
value: -9.26865979619022
- type: nauc_mrr_at_100_diff1
value: 26.098211857983944
- type: nauc_mrr_at_100_max
value: 25.751358106929445
- type: nauc_mrr_at_100_std
value: -9.348646640329418
- type: nauc_mrr_at_10_diff1
value: 26.245525532384857
- type: nauc_mrr_at_10_max
value: 25.751651308654733
- type: nauc_mrr_at_10_std
value: -10.162612510927444
- type: nauc_mrr_at_1_diff1
value: 33.74283305857714
- type: nauc_mrr_at_1_max
value: 33.58837545702206
- type: nauc_mrr_at_1_std
value: -11.623065310526266
- type: nauc_mrr_at_20_diff1
value: 25.889783688319756
- type: nauc_mrr_at_20_max
value: 25.752118615901914
- type: nauc_mrr_at_20_std
value: -9.822357050457521
- type: nauc_mrr_at_3_diff1
value: 27.564445527656073
- type: nauc_mrr_at_3_max
value: 27.360005995543013
- type: nauc_mrr_at_3_std
value: -9.833890331593217
- type: nauc_mrr_at_5_diff1
value: 26.822524992606787
- type: nauc_mrr_at_5_max
value: 26.284478920424583
- type: nauc_mrr_at_5_std
value: -11.036920037435278
- type: nauc_ndcg_at_1000_diff1
value: 22.865864500824603
- type: nauc_ndcg_at_1000_max
value: 22.771334973757252
- type: nauc_ndcg_at_1000_std
value: -4.391248945624055
- type: nauc_ndcg_at_100_diff1
value: 24.137939988386144
- type: nauc_ndcg_at_100_max
value: 23.87513301750976
- type: nauc_ndcg_at_100_std
value: -6.566673889142541
- type: nauc_ndcg_at_10_diff1
value: 23.28670973899235
- type: nauc_ndcg_at_10_max
value: 24.466850763499494
- type: nauc_ndcg_at_10_std
value: -10.258177551014816
- type: nauc_ndcg_at_1_diff1
value: 33.74283305857714
- type: nauc_ndcg_at_1_max
value: 33.58837545702206
- type: nauc_ndcg_at_1_std
value: -11.623065310526266
- type: nauc_ndcg_at_20_diff1
value: 22.989442500386524
- type: nauc_ndcg_at_20_max
value: 24.104082915814125
- type: nauc_ndcg_at_20_std
value: -9.45785928337488
- type: nauc_ndcg_at_3_diff1
value: 25.178014460273445
- type: nauc_ndcg_at_3_max
value: 25.942767533173754
- type: nauc_ndcg_at_3_std
value: -9.91363038933204
- type: nauc_ndcg_at_5_diff1
value: 23.991757042799776
- type: nauc_ndcg_at_5_max
value: 24.67696954394957
- type: nauc_ndcg_at_5_std
value: -12.31985800626722
- type: nauc_precision_at_1000_diff1
value: 8.73756056198236
- type: nauc_precision_at_1000_max
value: -2.2039393198217896
- type: nauc_precision_at_1000_std
value: 11.030221537933079
- type: nauc_precision_at_100_diff1
value: 20.215172391403144
- type: nauc_precision_at_100_max
value: 17.018645260191438
- type: nauc_precision_at_100_std
value: 3.767328710045164
- type: nauc_precision_at_10_diff1
value: 17.587454651591
- type: nauc_precision_at_10_max
value: 18.519756223864587
- type: nauc_precision_at_10_std
value: -7.57980264597448
- type: nauc_precision_at_1_diff1
value: 33.74283305857714
- type: nauc_precision_at_1_max
value: 33.58837545702206
- type: nauc_precision_at_1_std
value: -11.623065310526266
- type: nauc_precision_at_20_diff1
value: 16.8264764027673
- type: nauc_precision_at_20_max
value: 17.684383034724306
- type: nauc_precision_at_20_std
value: -4.715192266545397
- type: nauc_precision_at_3_diff1
value: 21.074816828033445
- type: nauc_precision_at_3_max
value: 21.203608983260384
- type: nauc_precision_at_3_std
value: -7.0598567996303165
- type: nauc_precision_at_5_diff1
value: 19.232226617012476
- type: nauc_precision_at_5_max
value: 18.21464537199811
- type: nauc_precision_at_5_std
value: -11.192063817701081
- type: nauc_recall_at_1000_diff1
value: 13.682126336330219
- type: nauc_recall_at_1000_max
value: 11.290148994929623
- type: nauc_recall_at_1000_std
value: 15.234970859087472
- type: nauc_recall_at_100_diff1
value: 21.54257810474028
- type: nauc_recall_at_100_max
value: 18.319728481896473
- type: nauc_recall_at_100_std
value: 1.8896944275133083
- type: nauc_recall_at_10_diff1
value: 18.303586564099813
- type: nauc_recall_at_10_max
value: 20.31707691425135
- type: nauc_recall_at_10_std
value: -8.56717254223721
- type: nauc_recall_at_1_diff1
value: 33.63991109177576
- type: nauc_recall_at_1_max
value: 34.23354700535017
- type: nauc_recall_at_1_std
value: -13.602316051776613
- type: nauc_recall_at_20_diff1
value: 18.133732998590617
- type: nauc_recall_at_20_max
value: 19.491824859679376
- type: nauc_recall_at_20_std
value: -6.958404447908455
- type: nauc_recall_at_3_diff1
value: 20.923379689287973
- type: nauc_recall_at_3_max
value: 22.305262469725605
- type: nauc_recall_at_3_std
value: -9.33545310798814
- type: nauc_recall_at_5_diff1
value: 18.697534927162877
- type: nauc_recall_at_5_max
value: 19.872464448638226
- type: nauc_recall_at_5_std
value: -13.201942499761413
- type: ndcg_at_1
value: 8.318
- type: ndcg_at_10
value: 13.536999999999999
- type: ndcg_at_100
value: 17.814
- type: ndcg_at_1000
value: 21.037
- type: ndcg_at_20
value: 14.795
- type: ndcg_at_3
value: 10.584
- type: ndcg_at_5
value: 11.631
- type: precision_at_1
value: 8.318
- type: precision_at_10
value: 2.348
- type: precision_at_100
value: 0.488
- type: precision_at_1000
value: 0.084
- type: precision_at_20
value: 1.4789999999999999
- type: precision_at_3
value: 4.559
- type: precision_at_5
value: 3.327
- type: recall_at_1
value: 7.468
- type: recall_at_10
value: 20.508000000000003
- type: recall_at_100
value: 40.969
- type: recall_at_1000
value: 66.01
- type: recall_at_20
value: 25.151
- type: recall_at_3
value: 12.187000000000001
- type: recall_at_5
value: 14.868
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER (default)
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 14.015
- type: map_at_1
value: 5.794
- type: map_at_10
value: 9.467
- type: map_at_100
value: 10.583
- type: map_at_1000
value: 10.738
- type: map_at_20
value: 10.019
- type: map_at_3
value: 7.800999999999999
- type: map_at_5
value: 8.530999999999999
- type: mrr_at_1
value: 12.37785016286645
- type: mrr_at_10
value: 19.195232924874603
- type: mrr_at_100
value: 20.36171753911915
- type: mrr_at_1000
value: 20.43422170175313
- type: mrr_at_20
value: 19.925433949052078
- type: mrr_at_3
value: 16.612377850162883
- type: mrr_at_5
value: 17.928338762214977
- type: nauc_map_at_1000_diff1
value: 30.77100530113992
- type: nauc_map_at_1000_max
value: 3.930399825338355
- type: nauc_map_at_1000_std
value: 19.339256296860647
- type: nauc_map_at_100_diff1
value: 30.731834293026033
- type: nauc_map_at_100_max
value: 3.9391965871824577
- type: nauc_map_at_100_std
value: 18.994224188430934
- type: nauc_map_at_10_diff1
value: 30.52002817023447
- type: nauc_map_at_10_max
value: 4.047355652304053
- type: nauc_map_at_10_std
value: 16.271456948493867
- type: nauc_map_at_1_diff1
value: 40.78221783055125
- type: nauc_map_at_1_max
value: 6.03643489529247
- type: nauc_map_at_1_std
value: 10.164994264153364
- type: nauc_map_at_20_diff1
value: 30.667265850525062
- type: nauc_map_at_20_max
value: 3.808011497380771
- type: nauc_map_at_20_std
value: 17.64597024700993
- type: nauc_map_at_3_diff1
value: 32.9882945525325
- type: nauc_map_at_3_max
value: 4.81442279492956
- type: nauc_map_at_3_std
value: 11.72899701083213
- type: nauc_map_at_5_diff1
value: 31.319747944398486
- type: nauc_map_at_5_max
value: 4.789346536725522
- type: nauc_map_at_5_std
value: 13.280932876910251
- type: nauc_mrr_at_1000_diff1
value: 28.72974681423866
- type: nauc_mrr_at_1000_max
value: 5.334428633833756
- type: nauc_mrr_at_1000_std
value: 21.94603472046183
- type: nauc_mrr_at_100_diff1
value: 28.71022403484308
- type: nauc_mrr_at_100_max
value: 5.333420382518744
- type: nauc_mrr_at_100_std
value: 21.95720361127466
- type: nauc_mrr_at_10_diff1
value: 28.123142846152966
- type: nauc_mrr_at_10_max
value: 5.476579464822251
- type: nauc_mrr_at_10_std
value: 20.85306394069719
- type: nauc_mrr_at_1_diff1
value: 34.81794628491484
- type: nauc_mrr_at_1_max
value: 6.5806430588232905
- type: nauc_mrr_at_1_std
value: 14.459527094653325
- type: nauc_mrr_at_20_diff1
value: 28.439259242098213
- type: nauc_mrr_at_20_max
value: 5.357148444191085
- type: nauc_mrr_at_20_std
value: 21.61419717452997
- type: nauc_mrr_at_3_diff1
value: 29.687849776616204
- type: nauc_mrr_at_3_max
value: 5.740633779727121
- type: nauc_mrr_at_3_std
value: 17.8879483888456
- type: nauc_mrr_at_5_diff1
value: 28.47430129361797
- type: nauc_mrr_at_5_max
value: 5.630703322113187
- type: nauc_mrr_at_5_std
value: 19.229576158387964
- type: nauc_ndcg_at_1000_diff1
value: 29.601902706390376
- type: nauc_ndcg_at_1000_max
value: 2.953924251677932
- type: nauc_ndcg_at_1000_std
value: 33.43699716309924
- type: nauc_ndcg_at_100_diff1
value: 28.61050534370323
- type: nauc_ndcg_at_100_max
value: 3.4205261114094623
- type: nauc_ndcg_at_100_std
value: 29.71705615290654
- type: nauc_ndcg_at_10_diff1
value: 27.08320442286844
- type: nauc_ndcg_at_10_max
value: 3.7887194412304863
- type: nauc_ndcg_at_10_std
value: 21.676623605562256
- type: nauc_ndcg_at_1_diff1
value: 34.81794628491484
- type: nauc_ndcg_at_1_max
value: 6.5806430588232905
- type: nauc_ndcg_at_1_std
value: 14.459527094653325
- type: nauc_ndcg_at_20_diff1
value: 27.787198576453758
- type: nauc_ndcg_at_20_max
value: 3.1540397427527713
- type: nauc_ndcg_at_20_std
value: 24.886749384694483
- type: nauc_ndcg_at_3_diff1
value: 29.951818040541088
- type: nauc_ndcg_at_3_max
value: 5.01579970046346
- type: nauc_ndcg_at_3_std
value: 15.279492475081327
- type: nauc_ndcg_at_5_diff1
value: 28.06492691727927
- type: nauc_ndcg_at_5_max
value: 4.89933436886099
- type: nauc_ndcg_at_5_std
value: 16.918642834035854
- type: nauc_precision_at_1000_diff1
value: 15.771733257364474
- type: nauc_precision_at_1000_max
value: 1.823845951487625
- type: nauc_precision_at_1000_std
value: 49.1852294234272
- type: nauc_precision_at_100_diff1
value: 18.265609570523985
- type: nauc_precision_at_100_max
value: 4.2756221878446885
- type: nauc_precision_at_100_std
value: 44.777126764828196
- type: nauc_precision_at_10_diff1
value: 17.001368989158973
- type: nauc_precision_at_10_max
value: 3.567699919296151
- type: nauc_precision_at_10_std
value: 32.23622509514423
- type: nauc_precision_at_1_diff1
value: 34.81794628491484
- type: nauc_precision_at_1_max
value: 6.5806430588232905
- type: nauc_precision_at_1_std
value: 14.459527094653325
- type: nauc_precision_at_20_diff1
value: 17.635731357627552
- type: nauc_precision_at_20_max
value: 3.034597543962715
- type: nauc_precision_at_20_std
value: 37.444737258116376
- type: nauc_precision_at_3_diff1
value: 22.582871559622486
- type: nauc_precision_at_3_max
value: 6.018578205165446
- type: nauc_precision_at_3_std
value: 19.760719025296815
- type: nauc_precision_at_5_diff1
value: 18.665624106588705
- type: nauc_precision_at_5_max
value: 5.618829486159042
- type: nauc_precision_at_5_std
value: 24.487192977269594
- type: nauc_recall_at_1000_diff1
value: 26.313094272841823
- type: nauc_recall_at_1000_max
value: -3.0358409209748767
- type: nauc_recall_at_1000_std
value: 52.23483909347241
- type: nauc_recall_at_100_diff1
value: 22.619825448361848
- type: nauc_recall_at_100_max
value: -0.48782855898636057
- type: nauc_recall_at_100_std
value: 39.456946722540245
- type: nauc_recall_at_10_diff1
value: 21.248191636390427
- type: nauc_recall_at_10_max
value: 1.057162598023577
- type: nauc_recall_at_10_std
value: 26.28529915222162
- type: nauc_recall_at_1_diff1
value: 40.78221783055125
- type: nauc_recall_at_1_max
value: 6.03643489529247
- type: nauc_recall_at_1_std
value: 10.164994264153364
- type: nauc_recall_at_20_diff1
value: 22.329681015763143
- type: nauc_recall_at_20_max
value: -0.9021963926705002
- type: nauc_recall_at_20_std
value: 31.423263430139137
- type: nauc_recall_at_3_diff1
value: 27.367759082174025
- type: nauc_recall_at_3_max
value: 3.9289202004328527
- type: nauc_recall_at_3_std
value: 13.622863131134919
- type: nauc_recall_at_5_diff1
value: 22.76288213235621
- type: nauc_recall_at_5_max
value: 3.471221773429057
- type: nauc_recall_at_5_std
value: 17.585600220417064
- type: ndcg_at_1
value: 12.378
- type: ndcg_at_10
value: 14.015
- type: ndcg_at_100
value: 19.555
- type: ndcg_at_1000
value: 22.979
- type: ndcg_at_20
value: 16.019
- type: ndcg_at_3
      value: 10.78
- type: ndcg_at_5
value: 11.773
- type: precision_at_1
value: 12.378
- type: precision_at_10
value: 4.567
- type: precision_at_100
value: 1.035
- type: precision_at_1000
value: 0.166
- type: precision_at_20
value: 3.114
- type: precision_at_3
value: 7.926
- type: precision_at_5
value: 6.215
- type: recall_at_1
value: 5.794
- type: recall_at_10
value: 17.407
- type: recall_at_100
value: 37.191
- type: recall_at_1000
value: 56.851
- type: recall_at_20
value: 23.165
- type: recall_at_3
value: 9.713
- type: recall_at_5
value: 12.415
- task:
type: Retrieval
dataset:
name: MTEB DBPedia (default)
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 19.899
- type: map_at_1
value: 3.465
- type: map_at_10
value: 7.794
- type: map_at_100
value: 10.933
- type: map_at_1000
value: 11.752
- type: map_at_20
value: 9.016
- type: map_at_3
value: 5.427
- type: map_at_5
value: 6.502
- type: mrr_at_1
value: 34.75
- type: mrr_at_10
value: 45.200793650793656
- type: mrr_at_100
value: 46.05239344037991
- type: mrr_at_1000
value: 46.0856684337964
- type: mrr_at_20
value: 45.710684362077565
- type: mrr_at_3
value: 42.208333333333336
- type: mrr_at_5
value: 43.808333333333344
- type: nauc_map_at_1000_diff1
value: 18.86972613270399
- type: nauc_map_at_1000_max
value: 20.274156189253244
- type: nauc_map_at_1000_std
value: 22.191040122589133
- type: nauc_map_at_100_diff1
value: 18.788504382797093
- type: nauc_map_at_100_max
value: 18.991259275904696
- type: nauc_map_at_100_std
value: 19.224470200905856
- type: nauc_map_at_10_diff1
value: 18.750083550817912
- type: nauc_map_at_10_max
value: 10.317804767409177
- type: nauc_map_at_10_std
value: 4.146780937716071
- type: nauc_map_at_1_diff1
value: 24.593368387483753
- type: nauc_map_at_1_max
value: 4.589639725353537
- type: nauc_map_at_1_std
value: -8.92237341364795
- type: nauc_map_at_20_diff1
value: 18.991788660584362
- type: nauc_map_at_20_max
value: 13.525701435829877
- type: nauc_map_at_20_std
value: 10.505788067068151
- type: nauc_map_at_3_diff1
value: 18.3208401615434
- type: nauc_map_at_3_max
value: 9.337037518676164
- type: nauc_map_at_3_std
value: -3.652233530159517
- type: nauc_map_at_5_diff1
value: 18.092639410476284
- type: nauc_map_at_5_max
value: 10.092917720641017
- type: nauc_map_at_5_std
value: 0.17001723577182712
- type: nauc_mrr_at_1000_diff1
value: 29.78358698105705
- type: nauc_mrr_at_1000_max
value: 28.715621788566008
- type: nauc_mrr_at_1000_std
value: 22.028656730472925
- type: nauc_mrr_at_100_diff1
value: 29.790252324106998
- type: nauc_mrr_at_100_max
value: 28.742783310038494
- type: nauc_mrr_at_100_std
value: 22.03968708083945
- type: nauc_mrr_at_10_diff1
value: 29.438930345540236
- type: nauc_mrr_at_10_max
value: 28.65369065827219
- type: nauc_mrr_at_10_std
value: 21.78750467411176
- type: nauc_mrr_at_1_diff1
value: 35.330827390243996
- type: nauc_mrr_at_1_max
value: 26.56882708002626
- type: nauc_mrr_at_1_std
value: 21.623824720391546
- type: nauc_mrr_at_20_diff1
value: 29.738885034343433
- type: nauc_mrr_at_20_max
value: 28.757633233697227
- type: nauc_mrr_at_20_std
value: 21.94206110931751
- type: nauc_mrr_at_3_diff1
value: 30.084883512926936
- type: nauc_mrr_at_3_max
value: 28.504733195949854
- type: nauc_mrr_at_3_std
value: 21.343105616755405
- type: nauc_mrr_at_5_diff1
value: 29.162370505723974
- type: nauc_mrr_at_5_max
value: 28.302134300102317
- type: nauc_mrr_at_5_std
value: 21.967069891186686
- type: nauc_ndcg_at_1000_diff1
value: 21.5599701482179
- type: nauc_ndcg_at_1000_max
value: 19.60442562497246
- type: nauc_ndcg_at_1000_std
value: 38.57803059971978
- type: nauc_ndcg_at_100_diff1
value: 20.869754081262034
- type: nauc_ndcg_at_100_max
value: 17.061854693160267
- type: nauc_ndcg_at_100_std
value: 28.495912815567348
- type: nauc_ndcg_at_10_diff1
value: 21.68424149188379
- type: nauc_ndcg_at_10_max
value: 17.7957499268384
- type: nauc_ndcg_at_10_std
value: 20.329697185043177
- type: nauc_ndcg_at_1_diff1
value: 33.15797652004303
- type: nauc_ndcg_at_1_max
value: 19.169777835934728
- type: nauc_ndcg_at_1_std
value: 16.460300389696954
- type: nauc_ndcg_at_20_diff1
value: 20.980003079381408
- type: nauc_ndcg_at_20_max
value: 16.31240132872873
- type: nauc_ndcg_at_20_std
value: 21.336530494236147
- type: nauc_ndcg_at_3_diff1
value: 23.747010783899103
- type: nauc_ndcg_at_3_max
value: 20.514543159699503
- type: nauc_ndcg_at_3_std
value: 19.913679184651535
- type: nauc_ndcg_at_5_diff1
value: 21.811506356457578
- type: nauc_ndcg_at_5_max
value: 19.600228375339086
- type: nauc_ndcg_at_5_std
value: 20.80223119600392
- type: nauc_precision_at_1000_diff1
value: 7.616167380395875
- type: nauc_precision_at_1000_max
value: 24.36987688613695
- type: nauc_precision_at_1000_std
value: 28.517709442088883
- type: nauc_precision_at_100_diff1
value: 10.899372478558005
- type: nauc_precision_at_100_max
value: 32.52543047557354
- type: nauc_precision_at_100_std
value: 40.418143841067725
- type: nauc_precision_at_10_diff1
value: 12.454659530883022
- type: nauc_precision_at_10_max
value: 26.633347275996822
- type: nauc_precision_at_10_std
value: 31.766535462628333
- type: nauc_precision_at_1_diff1
value: 35.330827390243996
- type: nauc_precision_at_1_max
value: 26.56882708002626
- type: nauc_precision_at_1_std
value: 21.623824720391546
- type: nauc_precision_at_20_diff1
value: 13.710148345557894
- type: nauc_precision_at_20_max
value: 30.06641352798287
- type: nauc_precision_at_20_std
value: 37.51642649937503
- type: nauc_precision_at_3_diff1
value: 19.379905126167277
- type: nauc_precision_at_3_max
value: 29.474064921517996
- type: nauc_precision_at_3_std
value: 24.324769024438673
- type: nauc_precision_at_5_diff1
value: 14.983583546795229
- type: nauc_precision_at_5_max
value: 29.377923800204137
- type: nauc_precision_at_5_std
value: 28.792665620205433
- type: nauc_recall_at_1000_diff1
value: 9.420323994147108
- type: nauc_recall_at_1000_max
value: 1.716458858147155
- type: nauc_recall_at_1000_std
value: 42.675208969537806
- type: nauc_recall_at_100_diff1
value: 10.524089820623148
- type: nauc_recall_at_100_max
value: 4.847393922578022
- type: nauc_recall_at_100_std
value: 25.881256479477425
- type: nauc_recall_at_10_diff1
value: 10.405559854705523
- type: nauc_recall_at_10_max
value: -0.7229949712397538
- type: nauc_recall_at_10_std
value: 1.2453684953323285
- type: nauc_recall_at_1_diff1
value: 24.593368387483753
- type: nauc_recall_at_1_max
value: 4.589639725353537
- type: nauc_recall_at_1_std
value: -8.92237341364795
- type: nauc_recall_at_20_diff1
value: 9.153545675349667
- type: nauc_recall_at_20_max
value: 1.0523663509920702
- type: nauc_recall_at_20_std
value: 9.617722656364721
- type: nauc_recall_at_3_diff1
value: 11.453608857041628
- type: nauc_recall_at_3_max
value: 6.541125581241787
- type: nauc_recall_at_3_std
value: -6.374588849217941
- type: nauc_recall_at_5_diff1
value: 10.747977942968255
- type: nauc_recall_at_5_max
value: 3.2154611210290445
- type: nauc_recall_at_5_std
value: -1.2652013924076986
- type: ndcg_at_1
value: 24.25
- type: ndcg_at_10
value: 19.899
- type: ndcg_at_100
value: 23.204
- type: ndcg_at_1000
value: 29.658
- type: ndcg_at_20
      value: 19.583
- type: ndcg_at_3
value: 21.335
- type: ndcg_at_5
      value: 20.414
- type: precision_at_1
value: 34.75
- type: precision_at_10
value: 18.075
- type: precision_at_100
value: 5.897
- type: precision_at_1000
value: 1.22
- type: precision_at_20
value: 13.55
- type: precision_at_3
      value: 26.833
- type: precision_at_5
value: 22.6
- type: recall_at_1
value: 3.465
- type: recall_at_10
value: 12.606
- type: recall_at_100
      value: 29.844
- type: recall_at_1000
      value: 52.243
- type: recall_at_20
      value: 16.931
- type: recall_at_3
value: 6.425
- type: recall_at_5
value: 8.818
- task:
type: Classification
dataset:
name: MTEB EmotionClassification (default)
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
      value: 38.34
- type: f1
value: 34.598741976118816
- type: f1_weighted
value: 40.51989104726522
- type: main_score
      value: 38.34
- task:
type: Retrieval
dataset:
name: MTEB FEVER (default)
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 25.006
- type: map_at_1
value: 13.943
- type: map_at_10
value: 20.706
- type: map_at_100
      value: 21.74
- type: map_at_1000
value: 21.822
- type: map_at_20
value: 21.267
- type: map_at_3
value: 18.35
- type: map_at_5
value: 19.636
- type: mrr_at_1
value: 14.79147914791479
- type: mrr_at_10
value: 21.939967806304423
- type: mrr_at_100
value: 22.991772526136195
- type: mrr_at_1000
value: 23.068306121221312
- type: mrr_at_20
value: 22.521146379622163
- type: mrr_at_3
value: 19.484448444844478
- type: mrr_at_5
value: 20.817331733173358
- type: nauc_map_at_1000_diff1
value: 19.35822964414219
- type: nauc_map_at_1000_max
value: 8.897124191699918
- type: nauc_map_at_1000_std
value: -14.004128494439424
- type: nauc_map_at_100_diff1
value: 19.34567869663468
- type: nauc_map_at_100_max
value: 8.8745190516295
- type: nauc_map_at_100_std
value: -14.025946762212236
- type: nauc_map_at_10_diff1
value: 19.478894508723158
- type: nauc_map_at_10_max
value: 8.614136366133858
- type: nauc_map_at_10_std
value: -14.636265322683597
- type: nauc_map_at_1_diff1
value: 23.688109743445253
- type: nauc_map_at_1_max
value: 10.721419669570178
- type: nauc_map_at_1_std
value: -17.00198995751755
- type: nauc_map_at_20_diff1
value: 19.40994853288039
- type: nauc_map_at_20_max
value: 8.788561538894676
- type: nauc_map_at_20_std
value: -14.287595480928521
- type: nauc_map_at_3_diff1
value: 20.019246737479236
- type: nauc_map_at_3_max
value: 8.530000749651693
- type: nauc_map_at_3_std
value: -16.31053852110094
- type: nauc_map_at_5_diff1
value: 19.574801722611753
- type: nauc_map_at_5_max
value: 8.431256040109632
- type: nauc_map_at_5_std
value: -15.42991927435635
- type: nauc_mrr_at_1000_diff1
value: 19.199456594864415
- type: nauc_mrr_at_1000_max
value: 9.053366261880821
- type: nauc_mrr_at_1000_std
value: -14.325311358790312
- type: nauc_mrr_at_100_diff1
value: 19.183968461336264
- type: nauc_mrr_at_100_max
value: 9.0406708211084
- type: nauc_mrr_at_100_std
value: -14.333168371749
- type: nauc_mrr_at_10_diff1
value: 19.286280952658004
- type: nauc_mrr_at_10_max
value: 8.786679451075301
- type: nauc_mrr_at_10_std
value: -14.85433165190137
- type: nauc_mrr_at_1_diff1
value: 23.372945217632637
- type: nauc_mrr_at_1_max
value: 10.757009456320713
- type: nauc_mrr_at_1_std
value: -17.37470573558239
- type: nauc_mrr_at_20_diff1
value: 19.204260097760162
- type: nauc_mrr_at_20_max
value: 8.967269936629057
- type: nauc_mrr_at_20_std
value: -14.556203577633491
- type: nauc_mrr_at_3_diff1
value: 19.802237510569196
- type: nauc_mrr_at_3_max
value: 8.660412322072549
- type: nauc_mrr_at_3_std
value: -16.483667365878983
- type: nauc_mrr_at_5_diff1
value: 19.417190218500963
- type: nauc_mrr_at_5_max
value: 8.592050482160923
- type: nauc_mrr_at_5_std
value: -15.666970940052721
- type: nauc_ndcg_at_1000_diff1
value: 17.770326257033936
- type: nauc_ndcg_at_1000_max
value: 9.986868282212038
- type: nauc_ndcg_at_1000_std
value: -9.378246687942493
- type: nauc_ndcg_at_100_diff1
value: 17.57851695979306
- type: nauc_ndcg_at_100_max
value: 9.516456101829059
- type: nauc_ndcg_at_100_std
value: -9.92852108588332
- type: nauc_ndcg_at_10_diff1
value: 18.211042534939516
- type: nauc_ndcg_at_10_max
value: 8.263500593038305
- type: nauc_ndcg_at_10_std
value: -12.860334730832001
- type: nauc_ndcg_at_1_diff1
value: 23.372945217632637
- type: nauc_ndcg_at_1_max
value: 10.757009456320713
- type: nauc_ndcg_at_1_std
value: -17.37470573558239
- type: nauc_ndcg_at_20_diff1
value: 17.910709608958474
- type: nauc_ndcg_at_20_max
value: 8.893940446709529
- type: nauc_ndcg_at_20_std
value: -11.689263799945813
- type: nauc_ndcg_at_3_diff1
value: 19.09880112910806
- type: nauc_ndcg_at_3_max
value: 8.023263463318175
- type: nauc_ndcg_at_3_std
value: -16.092374418892373
- type: nauc_ndcg_at_5_diff1
value: 18.42900402442049
- type: nauc_ndcg_at_5_max
value: 7.8858287226066235
- type: nauc_ndcg_at_5_std
value: -14.661280178399608
- type: nauc_precision_at_1000_diff1
value: 3.642347466781283
- type: nauc_precision_at_1000_max
value: 16.952404316587614
- type: nauc_precision_at_1000_std
value: 21.40131424089912
- type: nauc_precision_at_100_diff1
value: 9.750805732461842
- type: nauc_precision_at_100_max
value: 13.757879488937125
- type: nauc_precision_at_100_std
value: 8.039378982280406
- type: nauc_precision_at_10_diff1
value: 14.7918457440186
- type: nauc_precision_at_10_max
value: 8.123251440844076
- type: nauc_precision_at_10_std
value: -7.766522118292242
- type: nauc_precision_at_1_diff1
value: 23.372945217632637
- type: nauc_precision_at_1_max
value: 10.757009456320713
- type: nauc_precision_at_1_std
value: -17.37470573558239
- type: nauc_precision_at_20_diff1
value: 13.317651277911787
- type: nauc_precision_at_20_max
value: 10.204911801413331
- type: nauc_precision_at_20_std
value: -3.322012947463638
- type: nauc_precision_at_3_diff1
value: 16.938989829945534
- type: nauc_precision_at_3_max
value: 7.007727368306191
- type: nauc_precision_at_3_std
value: -15.264146253300096
- type: nauc_precision_at_5_diff1
value: 15.595830777905029
- type: nauc_precision_at_5_max
value: 6.87438645405223
- type: nauc_precision_at_5_std
value: -12.548740115098678
- type: nauc_recall_at_1000_diff1
value: 9.009543867034727
- type: nauc_recall_at_1000_max
value: 18.305044258577915
- type: nauc_recall_at_1000_std
value: 23.009148418514425
- type: nauc_recall_at_100_diff1
value: 11.15850015080056
- type: nauc_recall_at_100_max
value: 11.780408791390519
- type: nauc_recall_at_100_std
value: 6.246652097817795
- type: nauc_recall_at_10_diff1
value: 15.099829144415247
- type: nauc_recall_at_10_max
value: 7.075068492864811
- type: nauc_recall_at_10_std
value: -7.878092251138417
- type: nauc_recall_at_1_diff1
value: 23.688109743445253
- type: nauc_recall_at_1_max
value: 10.721419669570178
- type: nauc_recall_at_1_std
value: -17.00198995751755
- type: nauc_recall_at_20_diff1
value: 13.85704310580134
- type: nauc_recall_at_20_max
value: 9.007426388276338
- type: nauc_recall_at_20_std
value: -3.9997271157444843
- type: nauc_recall_at_3_diff1
value: 16.851129797737183
- type: nauc_recall_at_3_max
value: 6.616028659229676
- type: nauc_recall_at_3_std
value: -15.286301162412613
- type: nauc_recall_at_5_diff1
value: 15.671635716227339
- type: nauc_recall_at_5_max
value: 6.342388043913686
- type: nauc_recall_at_5_std
value: -12.39987752967968
- type: ndcg_at_1
      value: 14.791
- type: ndcg_at_10
value: 25.006
- type: ndcg_at_100
      value: 30.472
- type: ndcg_at_1000
      value: 32.806
- type: ndcg_at_20
value: 27.058
- type: ndcg_at_3
value: 20.112
- type: ndcg_at_5
value: 22.413
- type: precision_at_1
      value: 14.791
- type: precision_at_10
      value: 4.055
- type: precision_at_100
value: 0.697
- type: precision_at_1000
value: 0.092
- type: precision_at_20
value: 2.465
- type: precision_at_3
      value: 8.626
- type: precision_at_5
      value: 6.382
- type: recall_at_1
value: 13.943
- type: recall_at_10
      value: 37.397
- type: recall_at_100
      value: 63.335
- type: recall_at_1000
value: 81.428
- type: recall_at_20
value: 45.358
- type: recall_at_3
value: 24.082
- type: recall_at_5
value: 29.563
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018 (default)
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 11.167
- type: map_at_1
value: 5.055
- type: map_at_10
value: 7.974
- type: map_at_100
value: 8.738
- type: map_at_1000
value: 8.916
- type: map_at_20
value: 8.341
- type: map_at_3
value: 6.857
- type: map_at_5
      value: 7.501
- type: mrr_at_1
value: 10.030864197530864
- type: mrr_at_10
value: 14.756087105624141
- type: mrr_at_100
value: 15.562190249516133
- type: mrr_at_1000
value: 15.69044643307793
- type: mrr_at_20
value: 15.164252290155286
- type: mrr_at_3
value: 13.297325102880658
- type: mrr_at_5
value: 14.130658436213992
- type: nauc_map_at_1000_diff1
value: 21.581584639641356
- type: nauc_map_at_1000_max
value: -3.591350057991658
- type: nauc_map_at_1000_std
value: 2.2450733180258466
- type: nauc_map_at_100_diff1
value: 21.678068750484663
- type: nauc_map_at_100_max
value: -3.754793884673454
- type: nauc_map_at_100_std
value: 2.1134125512643034
- type: nauc_map_at_10_diff1
value: 22.267707890250872
- type: nauc_map_at_10_max
value: -4.109027667129512
- type: nauc_map_at_10_std
value: 1.7397026170215282
- type: nauc_map_at_1_diff1
value: 24.393602819317127
- type: nauc_map_at_1_max
value: -5.463161484041758
- type: nauc_map_at_1_std
value: 3.4527844717330898
- type: nauc_map_at_20_diff1
value: 22.16603827194384
- type: nauc_map_at_20_max
value: -3.829133240985351
- type: nauc_map_at_20_std
value: 2.273305218017184
- type: nauc_map_at_3_diff1
value: 25.550971234557217
- type: nauc_map_at_3_max
value: -5.912131631375139
- type: nauc_map_at_3_std
value: 2.6270431833752226
- type: nauc_map_at_5_diff1
value: 23.693227817850918
- type: nauc_map_at_5_max
value: -4.430117256044587
- type: nauc_map_at_5_std
value: 1.90476330618582
- type: nauc_mrr_at_1000_diff1
value: 18.407848757651383
- type: nauc_mrr_at_1000_max
value: 1.4692643101259266
- type: nauc_mrr_at_1000_std
value: -1.4737021198395484
- type: nauc_mrr_at_100_diff1
value: 18.373936364611946
- type: nauc_mrr_at_100_max
value: 1.4600491055347338
- type: nauc_mrr_at_100_std
value: -1.5315816773226647
- type: nauc_mrr_at_10_diff1
value: 18.812075225359994
- type: nauc_mrr_at_10_max
value: 1.1423422260007967
- type: nauc_mrr_at_10_std
value: -1.4331421942145333
- type: nauc_mrr_at_1_diff1
value: 21.042020105537055
- type: nauc_mrr_at_1_max
value: -1.8286330117738627
- type: nauc_mrr_at_1_std
value: 0.6107108684145417
- type: nauc_mrr_at_20_diff1
value: 18.67480478225173
- type: nauc_mrr_at_20_max
value: 1.262037517477333
- type: nauc_mrr_at_20_std
value: -1.3030974525400356
- type: nauc_mrr_at_3_diff1
value: 20.263359986054837
- type: nauc_mrr_at_3_max
value: -0.3775317483949404
- type: nauc_mrr_at_3_std
value: -1.365236958935102
- type: nauc_mrr_at_5_diff1
value: 19.555216165143772
- type: nauc_mrr_at_5_max
value: 0.364621169263337
- type: nauc_mrr_at_5_std
value: -1.0513020604553038
- type: nauc_ndcg_at_1000_diff1
value: 15.768274611971735
- type: nauc_ndcg_at_1000_max
value: 2.0520976478520327
- type: nauc_ndcg_at_1000_std
value: 2.877627036243521
- type: nauc_ndcg_at_100_diff1
value: 16.128663871942763
- type: nauc_ndcg_at_100_max
value: -0.34227560585178396
- type: nauc_ndcg_at_100_std
value: 0.8164780238765409
- type: nauc_ndcg_at_10_diff1
value: 19.282198569420846
- type: nauc_ndcg_at_10_max
value: -1.3250908207898342
- type: nauc_ndcg_at_10_std
value: 0.28825143098016265
- type: nauc_ndcg_at_1_diff1
value: 21.042020105537055
- type: nauc_ndcg_at_1_max
value: -1.8286330117738627
- type: nauc_ndcg_at_1_std
value: 0.6107108684145417
- type: nauc_ndcg_at_20_diff1
value: 19.028654575882847
- type: nauc_ndcg_at_20_max
value: -0.9325610304848784
- type: nauc_ndcg_at_20_std
value: 1.5749962746078057
- type: nauc_ndcg_at_3_diff1
value: 21.864688221213875
- type: nauc_ndcg_at_3_max
value: -2.6883486751081693
- type: nauc_ndcg_at_3_std
value: 0.17632918486246743
- type: nauc_ndcg_at_5_diff1
value: 21.280319590515656
- type: nauc_ndcg_at_5_max
value: -1.7628672417522795
- type: nauc_ndcg_at_5_std
value: 0.35504411508050127
- type: nauc_precision_at_1000_diff1
value: -5.134118935123325
- type: nauc_precision_at_1000_max
value: 22.854317653101646
- type: nauc_precision_at_1000_std
value: -5.519945670535999
- type: nauc_precision_at_100_diff1
value: 2.410623305126647
- type: nauc_precision_at_100_max
value: 11.323949150994391
- type: nauc_precision_at_100_std
value: -4.4400164174748395
- type: nauc_precision_at_10_diff1
value: 11.14562925123435
- type: nauc_precision_at_10_max
value: 6.701684471603129
- type: nauc_precision_at_10_std
value: -3.507090397196342
- type: nauc_precision_at_1_diff1
value: 21.042020105537055
- type: nauc_precision_at_1_max
value: -1.8286330117738627
- type: nauc_precision_at_1_std
value: 0.6107108684145417
- type: nauc_precision_at_20_diff1
value: 10.58098788224169
- type: nauc_precision_at_20_max
value: 7.5107799297769935
- type: nauc_precision_at_20_std
value: -1.5100106529478114
- type: nauc_precision_at_3_diff1
value: 19.795198818057667
- type: nauc_precision_at_3_max
value: 0.4713854827815967
- type: nauc_precision_at_3_std
value: -3.125924766538086
- type: nauc_precision_at_5_diff1
value: 16.907379789095696
- type: nauc_precision_at_5_max
value: 4.140243156305644
- type: nauc_precision_at_5_std
value: -1.8178346354290582
- type: nauc_recall_at_1000_diff1
value: 4.711761259530349
- type: nauc_recall_at_1000_max
value: 3.897303116005553
- type: nauc_recall_at_1000_std
value: 14.259168849028104
- type: nauc_recall_at_100_diff1
value: 4.811342813866857
- type: nauc_recall_at_100_max
value: -0.46422331209391143
- type: nauc_recall_at_100_std
value: 1.702190380676355
- type: nauc_recall_at_10_diff1
value: 14.112982578958079
- type: nauc_recall_at_10_max
value: -0.6934250965951679
- type: nauc_recall_at_10_std
value: -0.19882683954238423
- type: nauc_recall_at_1_diff1
value: 24.393602819317127
- type: nauc_recall_at_1_max
value: -5.463161484041758
- type: nauc_recall_at_1_std
value: 3.4527844717330898
- type: nauc_recall_at_20_diff1
value: 13.19557557901834
- type: nauc_recall_at_20_max
value: 0.1538644708778628
- type: nauc_recall_at_20_std
value: 3.0492797001932974
- type: nauc_recall_at_3_diff1
value: 24.182210704492558
- type: nauc_recall_at_3_max
value: -6.034324229051654
- type: nauc_recall_at_3_std
value: 2.8490090980023637
- type: nauc_recall_at_5_diff1
value: 19.011063131073744
- type: nauc_recall_at_5_max
value: -2.119359618883548
- type: nauc_recall_at_5_std
value: 0.8198903805407032
- type: ndcg_at_1
      value: 10.031
- type: ndcg_at_10
value: 11.167
- type: ndcg_at_100
value: 15.409
- type: ndcg_at_1000
value: 19.947
- type: ndcg_at_20
value: 12.483
- type: ndcg_at_3
value: 9.532
- type: ndcg_at_5
value: 10.184
- type: precision_at_1
      value: 10.031
- type: precision_at_10
      value: 3.133
- type: precision_at_100
      value: 0.727
- type: precision_at_1000
value: 0.15
- type: precision_at_20
value: 2.06
- type: precision_at_3
      value: 6.481
- type: precision_at_5
value: 4.877
- type: recall_at_1
value: 5.055
- type: recall_at_10
value: 14.193
- type: recall_at_100
value: 31.47
- type: recall_at_1000
value: 60.007
- type: recall_at_20
value: 18.532
- type: recall_at_3
      value: 8.864
- type: recall_at_5
      value: 11.354
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA (default)
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
      value: 30.838
- type: map_at_1
value: 17.535
- type: map_at_10
      value: 24.127
- type: map_at_100
value: 24.897
- type: map_at_1000
value: 24.991
- type: map_at_20
value: 24.537
- type: map_at_3
value: 22.314
- type: map_at_5
value: 23.369
- type: mrr_at_1
value: 35.07089804186361
- type: mrr_at_10
value: 41.84109835696607
- type: mrr_at_100
value: 42.50312939357189
- type: mrr_at_1000
value: 42.557192847100204
- type: mrr_at_20
value: 42.23392771922393
- type: mrr_at_3
value: 40.0540175557057
- type: mrr_at_5
value: 41.09723160027011
- type: nauc_map_at_1000_diff1
value: 53.405765033756104
- type: nauc_map_at_1000_max
value: 7.122736293690594
- type: nauc_map_at_1000_std
value: 25.154222353909706
- type: nauc_map_at_100_diff1
value: 53.424105025391235
- type: nauc_map_at_100_max
value: 7.127661247301736
- type: nauc_map_at_100_std
value: 25.080306702030054
- type: nauc_map_at_10_diff1
value: 53.83507469889932
- type: nauc_map_at_10_max
value: 7.239978390454264
- type: nauc_map_at_10_std
value: 24.216110502987867
- type: nauc_map_at_1_diff1
value: 64.45610830977103
- type: nauc_map_at_1_max
value: 10.831236114417758
- type: nauc_map_at_1_std
value: 18.282463736681766
- type: nauc_map_at_20_diff1
value: 53.50246555744542
- type: nauc_map_at_20_max
value: 7.1666672586766085
- type: nauc_map_at_20_std
value: 24.648695320801803
- type: nauc_map_at_3_diff1
value: 55.467529631560474
- type: nauc_map_at_3_max
value: 8.281275214726968
- type: nauc_map_at_3_std
value: 22.436972833181386
- type: nauc_map_at_5_diff1
value: 54.2596974292177
- type: nauc_map_at_5_max
value: 7.5791705198322585
- type: nauc_map_at_5_std
value: 23.272036332669295
- type: nauc_mrr_at_1000_diff1
value: 60.01986079158693
- type: nauc_mrr_at_1000_max
value: 9.046571417308733
- type: nauc_mrr_at_1000_std
value: 22.078576232724707
- type: nauc_mrr_at_100_diff1
value: 60.01145860886984
- type: nauc_mrr_at_100_max
value: 9.036448042324515
- type: nauc_mrr_at_100_std
value: 22.073613864801413
- type: nauc_mrr_at_10_diff1
value: 60.138490480821595
- type: nauc_mrr_at_10_max
value: 9.09851806151594
- type: nauc_mrr_at_10_std
value: 21.871816692853095
- type: nauc_mrr_at_1_diff1
value: 64.45610830977103
- type: nauc_mrr_at_1_max
value: 10.831236114417758
- type: nauc_mrr_at_1_std
value: 18.282463736681766
- type: nauc_mrr_at_20_diff1
value: 60.020756965348596
- type: nauc_mrr_at_20_max
value: 9.067384772615947
- type: nauc_mrr_at_20_std
value: 22.007284296200602
- type: nauc_mrr_at_3_diff1
value: 60.848848858927965
- type: nauc_mrr_at_3_max
value: 9.77819590832476
- type: nauc_mrr_at_3_std
value: 20.7857772481929
- type: nauc_mrr_at_5_diff1
value: 60.23023654313581
- type: nauc_mrr_at_5_max
value: 9.297697720996952
- type: nauc_mrr_at_5_std
value: 21.305246554366864
- type: nauc_ndcg_at_1000_diff1
value: 51.9050817941371
- type: nauc_ndcg_at_1000_max
value: 6.253060051785559
- type: nauc_ndcg_at_1000_std
value: 29.724428357103015
- type: nauc_ndcg_at_100_diff1
value: 52.197825295468256
- type: nauc_ndcg_at_100_max
value: 6.212784383093877
- type: nauc_ndcg_at_100_std
value: 28.65006820758606
- type: nauc_ndcg_at_10_diff1
value: 53.6117173506942
- type: nauc_ndcg_at_10_max
value: 6.6792682572264646
- type: nauc_ndcg_at_10_std
value: 25.56356291488488
- type: nauc_ndcg_at_1_diff1
value: 64.45610830977103
- type: nauc_ndcg_at_1_max
value: 10.831236114417758
- type: nauc_ndcg_at_1_std
value: 18.282463736681766
- type: nauc_ndcg_at_20_diff1
value: 52.725481130189465
- type: nauc_ndcg_at_20_max
value: 6.443880761918098
- type: nauc_ndcg_at_20_std
value: 26.623544659694815
- type: nauc_ndcg_at_3_diff1
value: 56.087927881432066
- type: nauc_ndcg_at_3_max
value: 8.38309550543212
- type: nauc_ndcg_at_3_std
value: 22.573762514655623
- type: nauc_ndcg_at_5_diff1
value: 54.351073912334144
- type: nauc_ndcg_at_5_max
value: 7.325834612406898
- type: nauc_ndcg_at_5_std
value: 23.7625099537027
- type: nauc_precision_at_1000_diff1
value: 24.555760070632065
- type: nauc_precision_at_1000_max
value: -0.030378364610462727
- type: nauc_precision_at_1000_std
value: 43.44197980424529
- type: nauc_precision_at_100_diff1
value: 31.89263750680818
- type: nauc_precision_at_100_max
value: 0.5967214311073074
- type: nauc_precision_at_100_std
value: 38.028330866223165
- type: nauc_precision_at_10_diff1
value: 42.72001946616996
- type: nauc_precision_at_10_max
value: 2.759405409849438
- type: nauc_precision_at_10_std
value: 29.948179807406504
- type: nauc_precision_at_1_diff1
value: 64.45610830977103
- type: nauc_precision_at_1_max
value: 10.831236114417758
- type: nauc_precision_at_1_std
value: 18.282463736681766
- type: nauc_precision_at_20_diff1
value: 38.77807631886789
- type: nauc_precision_at_20_max
value: 1.8720818516278552
- type: nauc_precision_at_20_std
value: 32.59464097769524
- type: nauc_precision_at_3_diff1
value: 50.84352281110305
- type: nauc_precision_at_3_max
value: 6.8098905022703455
- type: nauc_precision_at_3_std
value: 24.54656806570455
- type: nauc_precision_at_5_diff1
value: 46.09980845642094
- type: nauc_precision_at_5_max
value: 4.489864393832119
- type: nauc_precision_at_5_std
value: 26.34146412719015
- type: nauc_recall_at_1000_diff1
value: 24.55576007063215
- type: nauc_recall_at_1000_max
value: -0.030378364610333563
- type: nauc_recall_at_1000_std
value: 43.441979804245264
- type: nauc_recall_at_100_diff1
value: 31.892637506808146
- type: nauc_recall_at_100_max
value: 0.5967214311073054
- type: nauc_recall_at_100_std
value: 38.02833086622307
- type: nauc_recall_at_10_diff1
value: 42.72001946616998
- type: nauc_recall_at_10_max
value: 2.7594054098494403
- type: nauc_recall_at_10_std
value: 29.94817980740652
- type: nauc_recall_at_1_diff1
value: 64.45610830977103
- type: nauc_recall_at_1_max
value: 10.831236114417758
- type: nauc_recall_at_1_std
value: 18.282463736681766
- type: nauc_recall_at_20_diff1
value: 38.77807631886782
- type: nauc_recall_at_20_max
value: 1.872081851627872
- type: nauc_recall_at_20_std
value: 32.594640977695256
- type: nauc_recall_at_3_diff1
value: 50.843522811103036
- type: nauc_recall_at_3_max
value: 6.809890502270356
- type: nauc_recall_at_3_std
value: 24.546568065704555
- type: nauc_recall_at_5_diff1
value: 46.09980845642094
- type: nauc_recall_at_5_max
value: 4.48986439383211
- type: nauc_recall_at_5_std
value: 26.341464127190157
- type: ndcg_at_1
      value: 35.071
- type: ndcg_at_10
      value: 30.838
- type: ndcg_at_100
value: 34.473
- type: ndcg_at_1000
value: 36.788
- type: ndcg_at_20
value: 32.193
- type: ndcg_at_3
      value: 27.413
- type: ndcg_at_5
      value: 29.161
- type: precision_at_1
      value: 35.071
- type: precision_at_10
      value: 6.695
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.127
- type: precision_at_20
value: 3.785
- type: precision_at_3
value: 17.187
- type: precision_at_5
      value: 11.7
- type: recall_at_1
value: 17.535
- type: recall_at_10
      value: 33.477
- type: recall_at_100
value: 48.015
- type: recall_at_1000
      value: 63.484
- type: recall_at_20
      value: 37.846
- type: recall_at_3
      value: 25.78
- type: recall_at_5
      value: 29.251
- task:
type: Classification
dataset:
name: MTEB ImdbClassification (default)
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 66.5616
- type: ap
value: 61.38581579080602
- type: ap_weighted
value: 61.38581579080602
- type: f1
value: 66.15361405073979
- type: f1_weighted
value: 66.15361405073978
- type: main_score
value: 66.5616
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO (default)
type: mteb/msmarco
config: default
split: test
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 28.034
- type: map_at_1
value: 0.66
- type: map_at_10
      value: 4.371
- type: map_at_100
value: 12.02
- type: map_at_1000
value: 15.081
- type: map_at_20
value: 6.718
- type: map_at_3
      value: 1.739
- type: map_at_5
      value: 2.592
- type: mrr_at_1
value: 41.86046511627907
- type: mrr_at_10
value: 54.15651531930602
- type: mrr_at_100
value: 54.68712248786739
- type: mrr_at_1000
value: 54.68712248786739
- type: mrr_at_20
value: 54.272794389073454
- type: mrr_at_3
value: 51.937984496124024
- type: mrr_at_5
value: 52.40310077519379
- type: nauc_map_at_1000_diff1
value: 8.067177552562086
- type: nauc_map_at_1000_max
value: 50.80997888655191
- type: nauc_map_at_1000_std
value: 55.48450092063327
- type: nauc_map_at_100_diff1
value: 11.852088152898117
- type: nauc_map_at_100_max
value: 48.192262801076275
- type: nauc_map_at_100_std
value: 46.99716861803027
- type: nauc_map_at_10_diff1
value: 12.440097979884552
- type: nauc_map_at_10_max
value: 29.873253516213786
- type: nauc_map_at_10_std
value: 30.42960299808594
- type: nauc_map_at_1_diff1
value: 34.552395254431445
- type: nauc_map_at_1_max
value: 38.69572501766299
- type: nauc_map_at_1_std
value: 23.493916737503017
- type: nauc_map_at_20_diff1
value: 13.785974512045621
- type: nauc_map_at_20_max
value: 34.54060954861762
- type: nauc_map_at_20_std
value: 36.78361062739522
- type: nauc_map_at_3_diff1
value: 25.396598443628488
- type: nauc_map_at_3_max
value: 40.38715214284343
- type: nauc_map_at_3_std
value: 25.366480567034372
- type: nauc_map_at_5_diff1
value: 21.758905499107037
- type: nauc_map_at_5_max
value: 35.664518863717646
- type: nauc_map_at_5_std
value: 27.149202253810024
- type: nauc_mrr_at_1000_diff1
value: 17.603886573367394
- type: nauc_mrr_at_1000_max
value: 58.66874119428572
- type: nauc_mrr_at_1000_std
value: 42.279175325006555
- type: nauc_mrr_at_100_diff1
value: 17.603886573367394
- type: nauc_mrr_at_100_max
value: 58.66874119428572
- type: nauc_mrr_at_100_std
value: 42.279175325006555
- type: nauc_mrr_at_10_diff1
value: 17.323803643197643
- type: nauc_mrr_at_10_max
value: 58.762972566248315
- type: nauc_mrr_at_10_std
value: 42.56956515834332
- type: nauc_mrr_at_1_diff1
value: 27.861672627434668
- type: nauc_mrr_at_1_max
value: 62.257123563504756
- type: nauc_mrr_at_1_std
value: 44.379176486800986
- type: nauc_mrr_at_20_diff1
value: 17.44644565955209
- type: nauc_mrr_at_20_max
value: 58.58190663195971
- type: nauc_mrr_at_20_std
value: 42.33627290946193
- type: nauc_mrr_at_3_diff1
value: 17.262663278109798
- type: nauc_mrr_at_3_max
value: 56.454793834736094
- type: nauc_mrr_at_3_std
value: 41.08451346276091
- type: nauc_mrr_at_5_diff1
value: 16.613650570034434
- type: nauc_mrr_at_5_max
value: 55.66285623344173
- type: nauc_mrr_at_5_std
value: 40.38311275408144
- type: nauc_ndcg_at_1000_diff1
value: 10.174068866047635
- type: nauc_ndcg_at_1000_max
value: 51.73192889106936
- type: nauc_ndcg_at_1000_std
value: 59.65401111712334
- type: nauc_ndcg_at_100_diff1
value: 7.828653579924433
- type: nauc_ndcg_at_100_max
value: 54.36206806281852
- type: nauc_ndcg_at_100_std
value: 44.08756682730974
- type: nauc_ndcg_at_10_diff1
value: 3.1020204706672807
- type: nauc_ndcg_at_10_max
value: 49.25209127878138
- type: nauc_ndcg_at_10_std
value: 39.03800796651823
- type: nauc_ndcg_at_1_diff1
value: 31.384674368521292
- type: nauc_ndcg_at_1_max
value: 46.68691593258891
- type: nauc_ndcg_at_1_std
value: 23.497422044367447
- type: nauc_ndcg_at_20_diff1
value: 2.1223938698830445
- type: nauc_ndcg_at_20_max
value: 52.82778912003725
- type: nauc_ndcg_at_20_std
value: 40.85957147213028
- type: nauc_ndcg_at_3_diff1
value: 15.620541244360142
- type: nauc_ndcg_at_3_max
value: 53.11313758866487
- type: nauc_ndcg_at_3_std
value: 30.214636563641196
- type: nauc_ndcg_at_5_diff1
value: 11.094092047013888
- type: nauc_ndcg_at_5_max
value: 50.15717166769855
- type: nauc_ndcg_at_5_std
value: 32.63549193285381
- type: nauc_precision_at_1000_diff1
value: -18.87788252321529
- type: nauc_precision_at_1000_max
value: 47.752842936932964
- type: nauc_precision_at_1000_std
value: 46.53172081645067
- type: nauc_precision_at_100_diff1
value: -11.675608943686981
- type: nauc_precision_at_100_max
value: 57.37789290450161
- type: nauc_precision_at_100_std
value: 45.99043825302317
- type: nauc_precision_at_10_diff1
value: -5.316480906785367
- type: nauc_precision_at_10_max
value: 50.9022661670284
- type: nauc_precision_at_10_std
value: 41.249198804648444
- type: nauc_precision_at_1_diff1
value: 27.861672627434668
- type: nauc_precision_at_1_max
value: 62.257123563504756
- type: nauc_precision_at_1_std
value: 44.379176486800986
- type: nauc_precision_at_20_diff1
value: -4.546893782120849
- type: nauc_precision_at_20_max
value: 54.59631672833982
- type: nauc_precision_at_20_std
value: 42.784497023294186
- type: nauc_precision_at_3_diff1
value: 9.61605571022061
- type: nauc_precision_at_3_max
value: 58.49382945748053
- type: nauc_precision_at_3_std
value: 36.589164698407316
- type: nauc_precision_at_5_diff1
value: 4.337255192132767
- type: nauc_precision_at_5_max
value: 51.9951147484678
- type: nauc_precision_at_5_std
value: 34.468467294436486
- type: nauc_recall_at_1000_diff1
value: 12.99503296673786
- type: nauc_recall_at_1000_max
value: 40.71962531328987
- type: nauc_recall_at_1000_std
value: 61.64030151991186
- type: nauc_recall_at_100_diff1
value: 10.859337421704575
- type: nauc_recall_at_100_max
value: 38.842397587549044
- type: nauc_recall_at_100_std
value: 44.123802055364514
- type: nauc_recall_at_10_diff1
value: 5.054631656084283
- type: nauc_recall_at_10_max
value: 16.616637058750165
- type: nauc_recall_at_10_std
value: 23.85056756316223
- type: nauc_recall_at_1_diff1
value: 34.552395254431445
- type: nauc_recall_at_1_max
value: 38.69572501766299
- type: nauc_recall_at_1_std
value: 23.493916737503017
- type: nauc_recall_at_20_diff1
value: 11.266581564744333
- type: nauc_recall_at_20_max
value: 20.205268245387963
- type: nauc_recall_at_20_std
value: 25.000674179475464
- type: nauc_recall_at_3_diff1
value: 23.716522929925635
- type: nauc_recall_at_3_max
value: 33.675409791018915
- type: nauc_recall_at_3_std
value: 23.659590089606255
- type: nauc_recall_at_5_diff1
value: 13.826629690116377
- type: nauc_recall_at_5_max
value: 21.450396058089545
- type: nauc_recall_at_5_std
value: 21.053365906790678
- type: ndcg_at_1
value: 27.907
- type: ndcg_at_10
value: 28.034
- type: ndcg_at_100
      value: 28.166
- type: ndcg_at_1000
value: 36.361
- type: ndcg_at_20
value: 28.047
- type: ndcg_at_3
      value: 28.389
- type: ndcg_at_5
value: 28.307
- type: precision_at_1
value: 41.86
- type: precision_at_10
      value: 37.209
- type: precision_at_100
value: 18.093
- type: precision_at_1000
value: 3.995
- type: precision_at_20
value: 33.372
- type: precision_at_3
value: 42.636
- type: precision_at_5
value: 40.0
- type: recall_at_1
value: 0.66
- type: recall_at_10
value: 6.287
- type: recall_at_100
value: 24.134
- type: recall_at_1000
      value: 48.432
- type: recall_at_20
value: 10.897
- type: recall_at_3
value: 2.138
- type: recall_at_5
      value: 3.377
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 84.81988144094848
- type: f1
value: 84.06333895718355
- type: f1_weighted
value: 84.95181538630469
- type: main_score
value: 84.81988144094848
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 62.41222070223438
- type: f1
value: 46.156097858146175
- type: f1_weighted
value: 66.23266420473301
- type: main_score
value: 62.41222070223438
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 62.50168123739073
- type: f1
value: 60.72805496384179
- type: f1_weighted
value: 62.787680759907204
- type: main_score
value: 62.50168123739073
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 66.09280430396772
- type: f1
value: 65.36448769357172
- type: f1_weighted
value: 66.15203456480924
- type: main_score
value: 66.09280430396772
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P (default)
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 26.932942933622616
- type: v_measure
value: 26.932942933622616
- type: v_measure_std
value: 1.593124055965666
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S (default)
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 22.9594415386389
- type: v_measure
value: 22.9594415386389
- type: v_measure_std
value: 1.2719806552652395
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking (default)
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 28.527234738258063
- type: map
value: 28.527234738258063
- type: mrr
value: 29.001137590751057
- type: nAUC_map_diff1
value: 17.894640005397015
- type: nAUC_map_max
value: -32.33772009018379
- type: nAUC_map_std
value: -13.932018270818118
- type: nAUC_mrr_diff1
value: 16.6645956799536
- type: nAUC_mrr_max
value: -26.591327847291947
- type: nAUC_mrr_std
value: -11.52072949105865
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus (default)
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 23.318
- type: map_at_1
      value: 3.974
- type: map_at_10
value: 7.636
- type: map_at_100
      value: 9.566
- type: map_at_1000
value: 10.731
- type: map_at_20
      value: 8.389
- type: map_at_3
value: 5.836
- type: map_at_5
      value: 6.634
- type: mrr_at_1
value: 31.57894736842105
- type: mrr_at_10
value: 41.40436876504987
- type: mrr_at_100
value: 42.171381521810616
- type: mrr_at_1000
value: 42.21952740910268
- type: mrr_at_20
value: 41.75160733542153
- type: mrr_at_3
value: 38.544891640866865
- type: mrr_at_5
value: 40.495356037151694
- type: nauc_map_at_1000_diff1
value: 36.856779722587405
- type: nauc_map_at_1000_max
value: 1.0732856849015824
- type: nauc_map_at_1000_std
value: 9.651983758926798
- type: nauc_map_at_100_diff1
value: 37.7388774830525
- type: nauc_map_at_100_max
value: 0.5350831297890865
- type: nauc_map_at_100_std
value: 5.572219889903966
- type: nauc_map_at_10_diff1
value: 41.10439950831827
- type: nauc_map_at_10_max
value: -1.9365518645162703
- type: nauc_map_at_10_std
value: -0.14823142437775177
- type: nauc_map_at_1_diff1
value: 45.5844553027814
- type: nauc_map_at_1_max
value: -8.272551322248038
- type: nauc_map_at_1_std
value: -5.988582518897944
- type: nauc_map_at_20_diff1
value: 38.99926603388708
- type: nauc_map_at_20_max
value: -0.8765984795564569
- type: nauc_map_at_20_std
value: 1.8427808317285952
- type: nauc_map_at_3_diff1
value: 44.541009820342296
- type: nauc_map_at_3_max
value: -5.314865046137034
- type: nauc_map_at_3_std
value: -4.401240111896542
- type: nauc_map_at_5_diff1
value: 43.93142627220787
- type: nauc_map_at_5_max
value: -4.452186699937273
- type: nauc_map_at_5_std
value: -1.926768039888005
- type: nauc_mrr_at_1000_diff1
value: 31.753283629515227
- type: nauc_mrr_at_1000_max
value: 9.689948388217696
- type: nauc_mrr_at_1000_std
value: 22.70267321039036
- type: nauc_mrr_at_100_diff1
value: 31.729775359589773
- type: nauc_mrr_at_100_max
value: 9.729637548794349
- type: nauc_mrr_at_100_std
value: 22.680656825829267
- type: nauc_mrr_at_10_diff1
value: 31.725910736285666
- type: nauc_mrr_at_10_max
value: 9.676299619743284
- type: nauc_mrr_at_10_std
value: 22.987975982720496
- type: nauc_mrr_at_1_diff1
value: 33.222931085618626
- type: nauc_mrr_at_1_max
value: 3.484453564278958
- type: nauc_mrr_at_1_std
value: 14.566253883401012
- type: nauc_mrr_at_20_diff1
value: 31.70316773246007
- type: nauc_mrr_at_20_max
value: 9.857726052213023
- type: nauc_mrr_at_20_std
value: 22.691706596582133
- type: nauc_mrr_at_3_diff1
value: 33.123605268114545
- type: nauc_mrr_at_3_max
value: 7.595554226164336
- type: nauc_mrr_at_3_std
value: 22.833951307229185
- type: nauc_mrr_at_5_diff1
value: 32.33356989096538
- type: nauc_mrr_at_5_max
value: 8.78887950599465
- type: nauc_mrr_at_5_std
value: 23.75577044154664
- type: nauc_ndcg_at_1000_diff1
value: 29.06381153030341
- type: nauc_ndcg_at_1000_max
value: 12.496787837448844
- type: nauc_ndcg_at_1000_std
value: 21.957810402478064
- type: nauc_ndcg_at_100_diff1
value: 30.705847017840128
- type: nauc_ndcg_at_100_max
value: 7.14809714223451
- type: nauc_ndcg_at_100_std
value: 17.218742555337656
- type: nauc_ndcg_at_10_diff1
value: 28.03996243029464
- type: nauc_ndcg_at_10_max
value: 4.699374701730214
- type: nauc_ndcg_at_10_std
value: 24.227816808454218
- type: nauc_ndcg_at_1_diff1
value: 33.51847942809358
- type: nauc_ndcg_at_1_max
value: -0.15139755316818274
- type: nauc_ndcg_at_1_std
value: 17.16967561523347
- type: nauc_ndcg_at_20_diff1
value: 28.20952557682163
- type: nauc_ndcg_at_20_max
value: 4.145398659710493
- type: nauc_ndcg_at_20_std
value: 22.993088607717066
- type: nauc_ndcg_at_3_diff1
value: 27.613082038987592
- type: nauc_ndcg_at_3_max
value: 1.4593269064387369
- type: nauc_ndcg_at_3_std
value: 23.50820643331994
- type: nauc_ndcg_at_5_diff1
value: 28.240414065564686
- type: nauc_ndcg_at_5_max
value: 3.5129825777351504
- type: nauc_ndcg_at_5_std
value: 25.518429908335165
- type: nauc_precision_at_1000_diff1
value: 3.744031922083433
- type: nauc_precision_at_1000_max
value: -0.5091331293991512
- type: nauc_precision_at_1000_std
value: 44.81402869309276
- type: nauc_precision_at_100_diff1
value: 6.830797386827996
- type: nauc_precision_at_100_max
value: 4.0810548509653755
- type: nauc_precision_at_100_std
value: 42.7474662572479
- type: nauc_precision_at_10_diff1
value: 12.394335511926892
- type: nauc_precision_at_10_max
value: 10.49971612535947
- type: nauc_precision_at_10_std
value: 34.03347850666832
- type: nauc_precision_at_1_diff1
value: 33.222931085618626
- type: nauc_precision_at_1_max
value: 3.484453564278958
- type: nauc_precision_at_1_std
value: 14.566253883401012
- type: nauc_precision_at_20_diff1
value: 9.64344422081397
- type: nauc_precision_at_20_max
value: 6.621958244946981
- type: nauc_precision_at_20_std
value: 37.86581516903579
- type: nauc_precision_at_3_diff1
value: 20.278708738039267
- type: nauc_precision_at_3_max
value: 7.392289389157268
- type: nauc_precision_at_3_std
value: 27.036426818980896
- type: nauc_precision_at_5_diff1
value: 18.449282750023514
- type: nauc_precision_at_5_max
value: 9.979980772916283
- type: nauc_precision_at_5_std
value: 33.01802732071948
- type: nauc_recall_at_1000_diff1
value: 16.342561945689592
- type: nauc_recall_at_1000_max
value: 5.937671266428497
- type: nauc_recall_at_1000_std
value: 10.42918010425554
- type: nauc_recall_at_100_diff1
value: 19.13895811746396
- type: nauc_recall_at_100_max
value: 3.153899391811738
- type: nauc_recall_at_100_std
value: 1.04689826072118
- type: nauc_recall_at_10_diff1
value: 30.635745816653586
- type: nauc_recall_at_10_max
value: 1.5673249988390006
- type: nauc_recall_at_10_std
value: -3.6633108112395276
- type: nauc_recall_at_1_diff1
value: 45.5844553027814
- type: nauc_recall_at_1_max
value: -8.272551322248038
- type: nauc_recall_at_1_std
value: -5.988582518897944
- type: nauc_recall_at_20_diff1
value: 24.449469640898666
- type: nauc_recall_at_20_max
value: 3.6319822015373404
- type: nauc_recall_at_20_std
value: -3.460880541269202
- type: nauc_recall_at_3_diff1
value: 40.57120118352399
- type: nauc_recall_at_3_max
value: -6.4276251434173135
- type: nauc_recall_at_3_std
value: -5.987479062691147
- type: nauc_recall_at_5_diff1
value: 36.21768314516704
- type: nauc_recall_at_5_max
value: -4.847092890211095
- type: nauc_recall_at_5_std
value: -3.0514943484880144
- type: ndcg_at_1
value: 29.876
- type: ndcg_at_10
value: 23.318
- type: ndcg_at_100
value: 22.178
- type: ndcg_at_1000
value: 31.543
- type: ndcg_at_20
value: 21.718
- type: ndcg_at_3
value: 26.625
- type: ndcg_at_5
      value: 25.412
- type: precision_at_1
value: 31.579
- type: precision_at_10
      value: 17.245
- type: precision_at_100
value: 5.82
- type: precision_at_1000
value: 1.857
- type: precision_at_20
      value: 12.709
- type: precision_at_3
value: 24.974
- type: precision_at_5
value: 21.981
- type: recall_at_1
      value: 3.974
- type: recall_at_10
value: 11.433
- type: recall_at_100
value: 24.861
- type: recall_at_1000
      value: 57.759
- type: recall_at_20
value: 14.167
- type: recall_at_3
      value: 6.774
- type: recall_at_5
value: 8.713
- task:
type: Retrieval
dataset:
name: MTEB NQ (default)
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
      value: 17.682
- type: map_at_1
      value: 7.969
- type: map_at_10
value: 13.828
- type: map_at_100
value: 14.881
- type: map_at_1000
      value: 14.98
- type: map_at_20
      value: 14.422
- type: map_at_3
      value: 11.682
- type: map_at_5
      value: 12.837
- type: mrr_at_1
value: 9.096176129779836
- type: mrr_at_10
value: 15.333772462248707
- type: mrr_at_100
value: 16.309634922879194
- type: mrr_at_1000
value: 16.39475249150789
- type: mrr_at_20
value: 15.891392914358688
- type: mrr_at_3
value: 13.064889918887577
- type: mrr_at_5
value: 14.311993047508642
- type: nauc_map_at_1000_diff1
value: 19.775928600522615
- type: nauc_map_at_1000_max
value: 6.286282728873767
- type: nauc_map_at_1000_std
value: 10.433091988799701
- type: nauc_map_at_100_diff1
value: 19.76472010726201
- type: nauc_map_at_100_max
value: 6.3000520043276245
- type: nauc_map_at_100_std
value: 10.369742430725108
- type: nauc_map_at_10_diff1
value: 19.717104003612306
- type: nauc_map_at_10_max
value: 5.9416407746652915
- type: nauc_map_at_10_std
value: 9.269462518525886
- type: nauc_map_at_1_diff1
value: 22.577309259900126
- type: nauc_map_at_1_max
value: 4.4722142164380605
- type: nauc_map_at_1_std
value: 3.7899645702785345
- type: nauc_map_at_20_diff1
value: 19.71861462412693
- type: nauc_map_at_20_max
value: 6.104405666589615
- type: nauc_map_at_20_std
value: 9.774250304834347
- type: nauc_map_at_3_diff1
value: 20.745180167104174
- type: nauc_map_at_3_max
value: 4.726336508000744
- type: nauc_map_at_3_std
value: 7.012706580698335
- type: nauc_map_at_5_diff1
value: 20.401667911889596
- type: nauc_map_at_5_max
value: 5.021580992513943
- type: nauc_map_at_5_std
value: 8.232301301005908
- type: nauc_mrr_at_1000_diff1
value: 19.876105574468276
- type: nauc_mrr_at_1000_max
value: 5.92950987632599
- type: nauc_mrr_at_1000_std
value: 10.422385358307675
- type: nauc_mrr_at_100_diff1
value: 19.864601593092164
- type: nauc_mrr_at_100_max
value: 5.937364432461887
- type: nauc_mrr_at_100_std
value: 10.372545373358479
- type: nauc_mrr_at_10_diff1
value: 19.8074129108612
- type: nauc_mrr_at_10_max
value: 5.583608572112338
- type: nauc_mrr_at_10_std
value: 9.660933453553797
- type: nauc_mrr_at_1_diff1
value: 22.771833118893053
- type: nauc_mrr_at_1_max
value: 4.270593166778219
- type: nauc_mrr_at_1_std
value: 4.72067370933128
- type: nauc_mrr_at_20_diff1
value: 19.816299723557
- type: nauc_mrr_at_20_max
value: 5.803282270363233
- type: nauc_mrr_at_20_std
value: 9.982388740482714
- type: nauc_mrr_at_3_diff1
value: 20.764352672106014
- type: nauc_mrr_at_3_max
value: 4.308188794966225
- type: nauc_mrr_at_3_std
value: 7.424575450681196
- type: nauc_mrr_at_5_diff1
value: 20.468124439169884
- type: nauc_mrr_at_5_max
value: 4.717164145352797
- type: nauc_mrr_at_5_std
value: 8.75784949698527
- type: nauc_ndcg_at_1000_diff1
value: 18.988627444499162
- type: nauc_ndcg_at_1000_max
value: 8.336437983015612
- type: nauc_ndcg_at_1000_std
value: 17.785235937443314
- type: nauc_ndcg_at_100_diff1
value: 18.72435211905066
- type: nauc_ndcg_at_100_max
value: 8.509559844610813
- type: nauc_ndcg_at_100_std
value: 16.272027197158785
- type: nauc_ndcg_at_10_diff1
value: 18.50083720860625
- type: nauc_ndcg_at_10_max
value: 6.816989264362351
- type: nauc_ndcg_at_10_std
value: 11.70379688056292
- type: nauc_ndcg_at_1_diff1
value: 23.028151500845926
- type: nauc_ndcg_at_1_max
value: 4.252790790979486
- type: nauc_ndcg_at_1_std
value: 4.919320655470863
- type: nauc_ndcg_at_20_diff1
value: 18.61317480699593
- type: nauc_ndcg_at_20_max
value: 7.400038137531198
- type: nauc_ndcg_at_20_std
value: 12.975329660907905
- type: nauc_ndcg_at_3_diff1
value: 20.331305466487297
- type: nauc_ndcg_at_3_max
value: 4.451813547010051
- type: nauc_ndcg_at_3_std
value: 7.835866814473613
- type: nauc_ndcg_at_5_diff1
value: 19.933475062151903
- type: nauc_ndcg_at_5_max
value: 5.0523614629035
- type: nauc_ndcg_at_5_std
value: 9.763459907678518
- type: nauc_precision_at_1000_diff1
value: 10.24793761705778
- type: nauc_precision_at_1000_max
value: 10.459646580367272
- type: nauc_precision_at_1000_std
value: 35.19560755022326
- type: nauc_precision_at_100_diff1
value: 14.032733274764734
- type: nauc_precision_at_100_max
value: 12.582877921585014
- type: nauc_precision_at_100_std
value: 30.56446230218432
- type: nauc_precision_at_10_diff1
value: 15.46863641183508
- type: nauc_precision_at_10_max
value: 8.026206096826051
- type: nauc_precision_at_10_std
value: 17.580067448009732
- type: nauc_precision_at_1_diff1
value: 23.028151500845926
- type: nauc_precision_at_1_max
value: 4.252790790979486
- type: nauc_precision_at_1_std
value: 4.919320655470863
- type: nauc_precision_at_20_diff1
value: 15.577209585349616
- type: nauc_precision_at_20_max
value: 9.37176988371138
- type: nauc_precision_at_20_std
value: 20.825242862847972
- type: nauc_precision_at_3_diff1
value: 19.697434012748303
- type: nauc_precision_at_3_max
value: 3.817741628018302
- type: nauc_precision_at_3_std
value: 9.855204198464552
- type: nauc_precision_at_5_diff1
value: 18.757352510786994
- type: nauc_precision_at_5_max
value: 4.78932962761337
- type: nauc_precision_at_5_std
value: 13.485110478478058
- type: nauc_recall_at_1000_diff1
value: 16.784291464246394
- type: nauc_recall_at_1000_max
value: 15.357886220356304
- type: nauc_recall_at_1000_std
value: 47.3266711354422
- type: nauc_recall_at_100_diff1
value: 15.651366556591528
- type: nauc_recall_at_100_max
value: 14.108369717831499
- type: nauc_recall_at_100_std
value: 30.26307437972032
- type: nauc_recall_at_10_diff1
value: 15.332913342892315
- type: nauc_recall_at_10_max
value: 8.769293510819189
- type: nauc_recall_at_10_std
value: 15.625436932641975
- type: nauc_recall_at_1_diff1
value: 22.577309259900126
- type: nauc_recall_at_1_max
value: 4.4722142164380605
- type: nauc_recall_at_1_std
value: 3.7899645702785345
- type: nauc_recall_at_20_diff1
value: 15.760837708226655
- type: nauc_recall_at_20_max
value: 10.11729976512556
- type: nauc_recall_at_20_std
value: 18.300935029131725
- type: nauc_recall_at_3_diff1
value: 19.039476605698372
- type: nauc_recall_at_3_max
value: 4.107922037298003
- type: nauc_recall_at_3_std
value: 9.115412171303978
- type: nauc_recall_at_5_diff1
value: 18.363415603635758
- type: nauc_recall_at_5_max
value: 5.241253574533175
- type: nauc_recall_at_5_std
value: 12.124948884672802
- type: ndcg_at_1
value: 9.067
- type: ndcg_at_10
value: 17.682000000000002
- type: ndcg_at_100
value: 22.982
- type: ndcg_at_1000
value: 25.692999999999998
- type: ndcg_at_20
value: 19.747
- type: ndcg_at_3
value: 13.219
- type: ndcg_at_5
value: 15.312999999999999
- type: precision_at_1
value: 9.067
- type: precision_at_10
value: 3.3000000000000003
- type: precision_at_100
value: 0.631
- type: precision_at_1000
value: 0.089
- type: precision_at_20
value: 2.136
- type: precision_at_3
value: 6.228
- type: precision_at_5
value: 4.925
- type: recall_at_1
value: 7.968999999999999
- type: recall_at_10
value: 28.208
- type: recall_at_100
value: 52.776
- type: recall_at_1000
value: 73.571
- type: recall_at_20
value: 35.941
- type: recall_at_3
value: 16.338
- type: recall_at_5
value: 21.217
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval (default)
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 74.323
- type: map_at_1
value: 57.30800000000001
- type: map_at_10
value: 69.32000000000001
- type: map_at_100
value: 70.106
- type: map_at_1000
value: 70.149
- type: map_at_20
value: 69.807
- type: map_at_3
value: 66.418
- type: map_at_5
value: 68.184
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 73.97885714285673
- type: mrr_at_100
value: 74.29274218615109
- type: mrr_at_1000
value: 74.3051429938558
- type: mrr_at_20
value: 74.18544015014858
- type: mrr_at_3
value: 72.26666666666631
- type: mrr_at_5
value: 73.37966666666605
- type: nauc_map_at_1000_diff1
value: 69.18960163699573
- type: nauc_map_at_1000_max
value: 37.38136640005
- type: nauc_map_at_1000_std
value: -2.570923100785111
- type: nauc_map_at_100_diff1
value: 69.18751629878942
- type: nauc_map_at_100_max
value: 37.36952143443813
- type: nauc_map_at_100_std
value: -2.5886077139396027
- type: nauc_map_at_10_diff1
value: 69.09406013156409
- type: nauc_map_at_10_max
value: 36.877436974500775
- type: nauc_map_at_10_std
value: -3.3540620889292203
- type: nauc_map_at_1_diff1
value: 70.93951368121674
- type: nauc_map_at_1_max
value: 32.233487451612305
- type: nauc_map_at_1_std
value: -7.055750788201864
- type: nauc_map_at_20_diff1
value: 69.14097261555858
- type: nauc_map_at_20_max
value: 37.18308654380657
- type: nauc_map_at_20_std
value: -2.912685185426714
- type: nauc_map_at_3_diff1
value: 69.01140661964882
- type: nauc_map_at_3_max
value: 35.56708493366717
- type: nauc_map_at_3_std
value: -5.47958763916843
- type: nauc_map_at_5_diff1
value: 68.97841901572657
- type: nauc_map_at_5_max
value: 36.356674331191265
- type: nauc_map_at_5_std
value: -4.271166648670905
- type: nauc_mrr_at_1000_diff1
value: 70.61597700848178
- type: nauc_mrr_at_1000_max
value: 40.41208966087904
- type: nauc_mrr_at_1000_std
value: -0.15890737609620642
- type: nauc_mrr_at_100_diff1
value: 70.61360632996228
- type: nauc_mrr_at_100_max
value: 40.41568433400612
- type: nauc_mrr_at_100_std
value: -0.1448505595676874
- type: nauc_mrr_at_10_diff1
value: 70.5233993892019
- type: nauc_mrr_at_10_max
value: 40.36230785474746
- type: nauc_mrr_at_10_std
value: -0.22757815568658987
- type: nauc_mrr_at_1_diff1
value: 72.6747651764081
- type: nauc_mrr_at_1_max
value: 40.02178963789037
- type: nauc_mrr_at_1_std
value: -2.575126954097418
- type: nauc_mrr_at_20_diff1
value: 70.58326373490296
- type: nauc_mrr_at_20_max
value: 40.41333734338905
- type: nauc_mrr_at_20_std
value: -0.1345473571856357
- type: nauc_mrr_at_3_diff1
value: 70.37817581234762
- type: nauc_mrr_at_3_max
value: 40.203366387087705
- type: nauc_mrr_at_3_std
value: -1.2261489082901087
- type: nauc_mrr_at_5_diff1
value: 70.45626657672184
- type: nauc_mrr_at_5_max
value: 40.3234615411654
- type: nauc_mrr_at_5_std
value: -0.3805672716488398
- type: nauc_ndcg_at_1000_diff1
value: 69.21984468258341
- type: nauc_ndcg_at_1000_max
value: 39.0253925541956
- type: nauc_ndcg_at_1000_std
value: 0.8160264523775477
- type: nauc_ndcg_at_100_diff1
value: 69.15328478391302
- type: nauc_ndcg_at_100_max
value: 38.96655324359319
- type: nauc_ndcg_at_100_std
value: 1.1256651981311283
- type: nauc_ndcg_at_10_diff1
value: 68.53510190998198
- type: nauc_ndcg_at_10_max
value: 37.91208417950795
- type: nauc_ndcg_at_10_std
value: -0.7377655073302805
- type: nauc_ndcg_at_1_diff1
value: 72.63228601131651
- type: nauc_ndcg_at_1_max
value: 40.16828628757125
- type: nauc_ndcg_at_1_std
value: -2.528909627178983
- type: nauc_ndcg_at_20_diff1
value: 68.822583729052
- type: nauc_ndcg_at_20_max
value: 38.41592366520079
- type: nauc_ndcg_at_20_std
value: 0.06798311113755548
- type: nauc_ndcg_at_3_diff1
value: 68.1481692592636
- type: nauc_ndcg_at_3_max
value: 37.31206796055115
- type: nauc_ndcg_at_3_std
value: -3.254883595992796
- type: nauc_ndcg_at_5_diff1
value: 68.24715917081343
- type: nauc_ndcg_at_5_max
value: 37.56264948769021
- type: nauc_ndcg_at_5_std
value: -1.8709773297999994
- type: nauc_precision_at_1000_diff1
value: -27.810948267157137
- type: nauc_precision_at_1000_max
value: -0.24668486328059996
- type: nauc_precision_at_1000_std
value: 20.580820056804715
- type: nauc_precision_at_100_diff1
value: -22.061161829256797
- type: nauc_precision_at_100_max
value: 4.679165403717356
- type: nauc_precision_at_100_std
value: 21.989059211475855
- type: nauc_precision_at_10_diff1
value: -3.9320543024872556
- type: nauc_precision_at_10_max
value: 14.010070678201766
- type: nauc_precision_at_10_std
value: 16.669492507338155
- type: nauc_precision_at_1_diff1
value: 72.63228601131651
- type: nauc_precision_at_1_max
value: 40.16828628757125
- type: nauc_precision_at_1_std
value: -2.528909627178983
- type: nauc_precision_at_20_diff1
value: -12.164765481707331
- type: nauc_precision_at_20_max
value: 10.511899418907312
- type: nauc_precision_at_20_std
value: 19.320026937145183
- type: nauc_precision_at_3_diff1
value: 22.621554858906986
- type: nauc_precision_at_3_max
value: 24.326914902507287
- type: nauc_precision_at_3_std
value: 6.099411862597304
- type: nauc_precision_at_5_diff1
value: 8.981227790660293
- type: nauc_precision_at_5_max
value: 19.916827592062745
- type: nauc_precision_at_5_std
value: 11.93677912655441
- type: nauc_recall_at_1000_diff1
value: 60.79128240819883
- type: nauc_recall_at_1000_max
value: 44.80906309211301
- type: nauc_recall_at_1000_std
value: 56.54768589270181
- type: nauc_recall_at_100_diff1
value: 61.18835279218082
- type: nauc_recall_at_100_max
value: 39.61329094249297
- type: nauc_recall_at_100_std
value: 31.736658564346342
- type: nauc_recall_at_10_diff1
value: 61.3639032751697
- type: nauc_recall_at_10_max
value: 34.510711243051375
- type: nauc_recall_at_10_std
value: 4.855117542870995
- type: nauc_recall_at_1_diff1
value: 70.93951368121674
- type: nauc_recall_at_1_max
value: 32.233487451612305
- type: nauc_recall_at_1_std
value: -7.055750788201864
- type: nauc_recall_at_20_diff1
value: 61.27124485304799
- type: nauc_recall_at_20_max
value: 36.11805010411244
- type: nauc_recall_at_20_std
value: 11.38763207684191
- type: nauc_recall_at_3_diff1
value: 63.91101210841338
- type: nauc_recall_at_3_max
value: 33.23862328274836
- type: nauc_recall_at_3_std
value: -4.857791490570391
- type: nauc_recall_at_5_diff1
value: 62.37552817951354
- type: nauc_recall_at_5_max
value: 33.86753069930419
- type: nauc_recall_at_5_std
value: -0.4857746420435554
- type: ndcg_at_1
value: 66.02
- type: ndcg_at_10
value: 74.323
- type: ndcg_at_100
value: 76.806
- type: ndcg_at_1000
value: 77.436
- type: ndcg_at_20
value: 75.47500000000001
- type: ndcg_at_3
value: 70.44500000000001
- type: ndcg_at_5
value: 72.48
- type: precision_at_1
value: 66.02
- type: precision_at_10
value: 11.273
- type: precision_at_100
value: 1.373
- type: precision_at_1000
value: 0.149
- type: precision_at_20
value: 6.101
- type: precision_at_3
value: 30.5
- type: precision_at_5
value: 20.31
- type: recall_at_1
value: 57.30800000000001
- type: recall_at_10
value: 84.152
- type: recall_at_100
value: 93.989
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_20
value: 88.138
- type: recall_at_3
value: 73.137
- type: recall_at_5
value: 78.655
- task:
type: Clustering
dataset:
name: MTEB RedditClustering (default)
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 28.89014544508522
- type: v_measure
value: 28.89014544508522
- type: v_measure_std
value: 4.477854992673074
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P (default)
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 41.588064041506414
- type: v_measure
value: 41.588064041506414
- type: v_measure_std
value: 12.234957713539355
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS (default)
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 9.923
- type: map_at_1
value: 2.15
- type: map_at_10
value: 5.379
- type: map_at_100
value: 6.487
- type: map_at_1000
value: 6.726999999999999
- type: map_at_20
value: 5.845000000000001
- type: map_at_3
value: 3.943
- type: map_at_5
value: 4.642
- type: mrr_at_1
value: 10.6
- type: mrr_at_10
value: 17.65234126984126
- type: mrr_at_100
value: 18.72231260720679
- type: mrr_at_1000
value: 18.83457574677834
- type: mrr_at_20
value: 18.178004510968904
- type: mrr_at_3
value: 14.96666666666667
- type: mrr_at_5
value: 16.426666666666666
- type: nauc_map_at_1000_diff1
value: 11.904585832905996
- type: nauc_map_at_1000_max
value: 13.966912689458244
- type: nauc_map_at_1000_std
value: 14.274562318051975
- type: nauc_map_at_100_diff1
value: 11.914962635425084
- type: nauc_map_at_100_max
value: 13.792005445505046
- type: nauc_map_at_100_std
value: 13.688572560422358
- type: nauc_map_at_10_diff1
value: 12.924485348386265
- type: nauc_map_at_10_max
value: 12.924904365030008
- type: nauc_map_at_10_std
value: 11.028226417787405
- type: nauc_map_at_1_diff1
value: 17.278503151293908
- type: nauc_map_at_1_max
value: 7.878679954463645
- type: nauc_map_at_1_std
value: 5.787632681875146
- type: nauc_map_at_20_diff1
value: 12.361611976516448
- type: nauc_map_at_20_max
value: 13.430602876791497
- type: nauc_map_at_20_std
value: 11.626342360129135
- type: nauc_map_at_3_diff1
value: 13.25103680109857
- type: nauc_map_at_3_max
value: 11.851782553996365
- type: nauc_map_at_3_std
value: 7.429469629304992
- type: nauc_map_at_5_diff1
value: 13.800025735259355
- type: nauc_map_at_5_max
value: 12.565449305066048
- type: nauc_map_at_5_std
value: 9.75302950224773
- type: nauc_mrr_at_1000_diff1
value: 12.268595456055587
- type: nauc_mrr_at_1000_max
value: 9.25353359860505
- type: nauc_mrr_at_1000_std
value: 9.108487924061626
- type: nauc_mrr_at_100_diff1
value: 12.221030310338321
- type: nauc_mrr_at_100_max
value: 9.25521408834954
- type: nauc_mrr_at_100_std
value: 9.138330201368367
- type: nauc_mrr_at_10_diff1
value: 12.574921954053705
- type: nauc_mrr_at_10_max
value: 9.022771164246922
- type: nauc_mrr_at_10_std
value: 8.72904050693386
- type: nauc_mrr_at_1_diff1
value: 17.46158729503331
- type: nauc_mrr_at_1_max
value: 7.638928315208697
- type: nauc_mrr_at_1_std
value: 6.095710473752395
- type: nauc_mrr_at_20_diff1
value: 12.138920051010647
- type: nauc_mrr_at_20_max
value: 9.276258507402064
- type: nauc_mrr_at_20_std
value: 8.886687014526801
- type: nauc_mrr_at_3_diff1
value: 14.193338999133834
- type: nauc_mrr_at_3_max
value: 8.299120353947483
- type: nauc_mrr_at_3_std
value: 7.8035097667232005
- type: nauc_mrr_at_5_diff1
value: 13.111703855187907
- type: nauc_mrr_at_5_max
value: 9.120679964295672
- type: nauc_mrr_at_5_std
value: 8.32132668626495
- type: nauc_ndcg_at_1000_diff1
value: 8.86999972791066
- type: nauc_ndcg_at_1000_max
value: 15.310859480575436
- type: nauc_ndcg_at_1000_std
value: 21.250542726021116
- type: nauc_ndcg_at_100_diff1
value: 8.721788996698756
- type: nauc_ndcg_at_100_max
value: 13.753927264089416
- type: nauc_ndcg_at_100_std
value: 17.83014109593192
- type: nauc_ndcg_at_10_diff1
value: 10.851214040795984
- type: nauc_ndcg_at_10_max
value: 11.754038261909226
- type: nauc_ndcg_at_10_std
value: 11.732493442071242
- type: nauc_ndcg_at_1_diff1
value: 17.46158729503331
- type: nauc_ndcg_at_1_max
value: 7.638928315208697
- type: nauc_ndcg_at_1_std
value: 6.095710473752395
- type: nauc_ndcg_at_20_diff1
value: 9.76180043441647
- type: nauc_ndcg_at_20_max
value: 12.820709997321758
- type: nauc_ndcg_at_20_std
value: 12.721916889128632
- type: nauc_ndcg_at_3_diff1
value: 12.839313795789275
- type: nauc_ndcg_at_3_max
value: 10.610706825785767
- type: nauc_ndcg_at_3_std
value: 8.204558555180421
- type: nauc_ndcg_at_5_diff1
value: 12.406813811698386
- type: nauc_ndcg_at_5_max
value: 11.878799458897053
- type: nauc_ndcg_at_5_std
value: 10.186784386212949
- type: nauc_precision_at_1000_diff1
value: 2.8398170540614176
- type: nauc_precision_at_1000_max
value: 16.99931587707156
- type: nauc_precision_at_1000_std
value: 31.86724716316765
- type: nauc_precision_at_100_diff1
value: 3.4160417262207297
- type: nauc_precision_at_100_max
value: 14.437629378775577
- type: nauc_precision_at_100_std
value: 24.60677482735814
- type: nauc_precision_at_10_diff1
value: 7.433603751797789
- type: nauc_precision_at_10_max
value: 12.127707014834115
- type: nauc_precision_at_10_std
value: 14.347141705378737
- type: nauc_precision_at_1_diff1
value: 17.46158729503331
- type: nauc_precision_at_1_max
value: 7.638928315208697
- type: nauc_precision_at_1_std
value: 6.095710473752395
- type: nauc_precision_at_20_diff1
value: 5.555321803900292
- type: nauc_precision_at_20_max
value: 13.975730968140612
- type: nauc_precision_at_20_std
value: 15.701599582613069
- type: nauc_precision_at_3_diff1
value: 10.570021043882896
- type: nauc_precision_at_3_max
value: 11.640698048065092
- type: nauc_precision_at_3_std
value: 8.880832670930209
- type: nauc_precision_at_5_diff1
value: 10.192070602011636
- type: nauc_precision_at_5_max
value: 12.979688593338693
- type: nauc_precision_at_5_std
value: 12.116013499683467
- type: nauc_recall_at_1000_diff1
value: 2.883533640208864
- type: nauc_recall_at_1000_max
value: 18.09724738913881
- type: nauc_recall_at_1000_std
value: 32.15747757955521
- type: nauc_recall_at_100_diff1
value: 3.6040687535563998
- type: nauc_recall_at_100_max
value: 14.732664182141772
- type: nauc_recall_at_100_std
value: 24.427986607748
- type: nauc_recall_at_10_diff1
value: 7.587316953732061
- type: nauc_recall_at_10_max
value: 12.334929718954289
- type: nauc_recall_at_10_std
value: 14.094286673978088
- type: nauc_recall_at_1_diff1
value: 17.278503151293908
- type: nauc_recall_at_1_max
value: 7.878679954463645
- type: nauc_recall_at_1_std
value: 5.787632681875146
- type: nauc_recall_at_20_diff1
value: 5.706170516654628
- type: nauc_recall_at_20_max
value: 14.095625029855203
- type: nauc_recall_at_20_std
value: 15.241931131705527
- type: nauc_recall_at_3_diff1
value: 10.574961375800127
- type: nauc_recall_at_3_max
value: 11.733105660119586
- type: nauc_recall_at_3_std
value: 8.540340847563677
- type: nauc_recall_at_5_diff1
value: 10.158076693596577
- type: nauc_recall_at_5_max
value: 13.152816873926534
- type: nauc_recall_at_5_std
value: 11.843127888328391
- type: ndcg_at_1
value: 10.6
- type: ndcg_at_10
value: 9.923
- type: ndcg_at_100
value: 15.463
- type: ndcg_at_1000
value: 20.673
- type: ndcg_at_20
value: 11.468
- type: ndcg_at_3
value: 9.120000000000001
- type: ndcg_at_5
value: 8.08
- type: precision_at_1
value: 10.6
- type: precision_at_10
value: 5.319999999999999
- type: precision_at_100
value: 1.357
- type: precision_at_1000
value: 0.262
- type: precision_at_20
value: 3.56
- type: precision_at_3
value: 8.733
- type: precision_at_5
value: 7.3
- type: recall_at_1
value: 2.15
- type: recall_at_10
value: 10.745000000000001
- type: recall_at_100
value: 27.478
- type: recall_at_1000
value: 53.067
- type: recall_at_20
value: 14.432
- type: recall_at_3
value: 5.295
- type: recall_at_5
value: 7.37
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 75.0950047498747
- type: cosine_spearman
value: 66.17240782538595
- type: euclidean_pearson
value: 67.00770252295281
- type: euclidean_spearman
value: 60.910363132843514
- type: main_score
value: 66.17240782538595
- type: manhattan_pearson
value: 67.05219198532856
- type: manhattan_spearman
value: 61.09670227979067
- type: pearson
value: 75.0950047498747
- type: spearman
value: 66.17240782538595
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 70.27191745166907
- type: cosine_spearman
value: 61.89139464648924
- type: euclidean_pearson
value: 54.34524146536028
- type: euclidean_spearman
value: 50.72726514543895
- type: main_score
value: 61.89139464648924
- type: manhattan_pearson
value: 54.0517351204108
- type: manhattan_spearman
value: 50.62237885284486
- type: pearson
value: 70.27191745166907
- type: spearman
value: 61.89139464648924
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 70.19582039979868
- type: cosine_spearman
value: 71.66792475528088
- type: euclidean_pearson
value: 55.582203822685486
- type: euclidean_spearman
value: 56.20322977297382
- type: main_score
value: 71.66792475528088
- type: manhattan_pearson
value: 55.95799094895162
- type: manhattan_spearman
value: 56.588522991206325
- type: pearson
value: 70.19582039979868
- type: spearman
value: 71.66792475528088
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 69.52140108419252
- type: cosine_spearman
value: 67.82634222687376
- type: euclidean_pearson
value: 56.45640217254015
- type: euclidean_spearman
value: 56.232462674683994
- type: main_score
value: 67.82634222687376
- type: manhattan_pearson
value: 56.71095067060834
- type: manhattan_spearman
value: 56.419654300835596
- type: pearson
value: 69.52140108419252
- type: spearman
value: 67.82634222687376
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 73.66221619412464
- type: cosine_spearman
value: 75.48765072240437
- type: euclidean_pearson
value: 56.971989853952046
- type: euclidean_spearman
value: 59.57242983168428
- type: main_score
value: 75.48765072240437
- type: manhattan_pearson
value: 57.292670731862025
- type: manhattan_spearman
value: 59.64547291104911
- type: pearson
value: 73.66221619412464
- type: spearman
value: 75.48765072240437
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 62.328630460915925
- type: cosine_spearman
value: 66.48155706668948
- type: euclidean_pearson
value: 48.85087938485013
- type: euclidean_spearman
value: 51.58756922385477
- type: main_score
value: 66.48155706668948
- type: manhattan_pearson
value: 49.02650798849104
- type: manhattan_spearman
value: 51.597849334470936
- type: pearson
value: 62.328630460915925
- type: spearman
value: 66.48155706668948
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 21.344883409729785
- type: cosine_spearman
value: 19.492480027372526
- type: euclidean_pearson
value: -8.605176891549817
- type: euclidean_spearman
value: -7.528098935541785
- type: main_score
value: 19.492480027372526
- type: manhattan_pearson
value: -10.120526712428015
- type: manhattan_spearman
value: -8.968202174485103
- type: pearson
value: 21.344883409729785
- type: spearman
value: 19.492480027372526
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 14.966581838953037
- type: cosine_spearman
value: 13.24509138766898
- type: euclidean_pearson
value: -6.690226814122847
- type: euclidean_spearman
value: -11.282875560023765
- type: main_score
value: 13.24509138766898
- type: manhattan_pearson
value: -7.476797502897139
- type: manhattan_spearman
value: -11.92841312081328
- type: pearson
value: 14.966581838953037
- type: spearman
value: 13.24509138766898
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 18.309414985775234
- type: cosine_spearman
value: 14.341489363671842
- type: euclidean_pearson
value: -12.122888971186411
- type: euclidean_spearman
value: -16.469354911796607
- type: main_score
value: 14.341489363671842
- type: manhattan_pearson
value: -10.903411096507561
- type: manhattan_spearman
value: -13.076094357191614
- type: pearson
value: 18.309414985775234
- type: spearman
value: 14.341489363671842
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 21.301586456013037
- type: cosine_spearman
value: 22.571419522164376
- type: euclidean_pearson
value: -6.367176828477704
- type: euclidean_spearman
value: -9.877915052256634
- type: main_score
value: 22.571419522164376
- type: manhattan_pearson
value: -4.676449796672262
- type: manhattan_spearman
value: -7.3330561255268805
- type: pearson
value: 21.301586456013037
- type: spearman
value: 22.571419522164376
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 16.140292893693204
- type: cosine_spearman
value: 10.216376215477217
- type: euclidean_pearson
value: -15.27866395332899
- type: euclidean_spearman
value: -14.09405330374556
- type: main_score
value: 10.216376215477217
- type: manhattan_pearson
value: -14.968016143069224
- type: manhattan_spearman
value: -12.871979788571364
- type: pearson
value: 16.140292893693204
- type: spearman
value: 10.216376215477217
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 78.42242639560023
- type: cosine_spearman
value: 80.2472005970173
- type: euclidean_pearson
value: 66.28797094299918
- type: euclidean_spearman
value: 67.13581863643712
- type: main_score
value: 80.2472005970173
- type: manhattan_pearson
value: 66.02431023839748
- type: manhattan_spearman
value: 67.15538442088678
- type: pearson
value: 78.42242639560023
- type: spearman
value: 80.2472005970173
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: -5.762967943082491
- type: cosine_spearman
value: -6.184248227377756
- type: euclidean_pearson
value: -12.170911062337659
- type: euclidean_spearman
value: -9.846378276134612
- type: main_score
value: -6.184248227377756
- type: manhattan_pearson
value: -13.126030597269658
- type: manhattan_spearman
value: -11.320163726484019
- type: pearson
value: -5.762967943082491
- type: spearman
value: -6.184248227377756
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: -8.666319610669559
- type: cosine_spearman
value: -10.0877070299522
- type: euclidean_pearson
value: -21.16722886445997
- type: euclidean_spearman
value: -25.725365743898504
- type: main_score
value: -10.0877070299522
- type: manhattan_pearson
value: -22.03289222804741
- type: manhattan_spearman
value: -26.785390252425533
- type: pearson
value: -8.666319610669559
- type: spearman
value: -10.0877070299522
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 16.880423266497427
- type: cosine_spearman
value: 18.497107178067477
- type: euclidean_pearson
value: 14.33062698609246
- type: euclidean_spearman
value: 16.623349996837863
- type: main_score
value: 18.497107178067477
- type: manhattan_pearson
value: 21.024602299309286
- type: manhattan_spearman
value: 24.281840448539402
- type: pearson
value: 16.880423266497427
- type: spearman
value: 18.497107178067477
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 44.98861387948161
- type: cosine_spearman
value: 59.04270974068145
- type: euclidean_pearson
value: 49.574894395857484
- type: euclidean_spearman
value: 58.827686687567805
- type: main_score
value: 59.04270974068145
- type: manhattan_pearson
value: 48.65094961023066
- type: manhattan_spearman
value: 58.3204048215355
- type: pearson
value: 44.98861387948161
- type: spearman
value: 59.04270974068145
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 26.505168004689462
- type: cosine_spearman
value: 28.591720613248732
- type: euclidean_pearson
value: 24.74526273753091
- type: euclidean_spearman
value: 28.416241187559642
- type: main_score
value: 28.591720613248732
- type: manhattan_pearson
value: 23.527990703124505
- type: manhattan_spearman
value: 33.434031878984136
- type: pearson
value: 26.505168004689462
- type: spearman
value: 28.591720613248732
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 11.552622364692777
- type: cosine_spearman
value: 10.973019756392695
- type: euclidean_pearson
value: 2.373117729670719
- type: euclidean_spearman
value: 1.961823192174414
- type: main_score
value: 10.973019756392695
- type: manhattan_pearson
value: 2.4552310228655108
- type: manhattan_spearman
value: 2.9778196586898273
- type: pearson
value: 11.552622364692777
- type: spearman
value: 10.973019756392695
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 10.466988163502029
- type: cosine_spearman
value: -0.21879166839686814
- type: euclidean_pearson
value: 22.096342233944544
- type: euclidean_spearman
value: 3.010990103175947
- type: main_score
value: -0.21879166839686814
- type: manhattan_pearson
value: 27.847325418935775
- type: manhattan_spearman
value: 4.74569547403683
- type: pearson
value: 10.466988163502029
- type: spearman
value: -0.21879166839686814
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 66.80057012864974
- type: cosine_spearman
value: 66.52235871936412
- type: euclidean_pearson
value: 55.372109895942536
- type: euclidean_spearman
value: 56.04078716357898
- type: main_score
value: 66.52235871936412
- type: manhattan_pearson
value: 55.58797025494765
- type: manhattan_spearman
value: 56.179959581772266
- type: pearson
value: 66.80057012864974
- type: spearman
value: 66.52235871936412
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR (default)
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 71.11074203128574
- type: map
value: 71.11074203128574
- type: mrr
value: 89.77809499868323
- type: nAUC_map_diff1
value: 11.228330835325687
- type: nAUC_map_max
value: 54.45812469406701
- type: nAUC_map_std
value: 63.051723849534525
- type: nAUC_mrr_diff1
value: 47.94323704040123
- type: nAUC_mrr_max
value: 72.52180244204617
- type: nAUC_mrr_std
value: 64.6185657337566
- task:
type: Retrieval
dataset:
name: MTEB SciFact (default)
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 50.663000000000004
- type: map_at_1
value: 34.9
- type: map_at_10
value: 45.591
- type: map_at_100
value: 46.478
- type: map_at_1000
value: 46.544000000000004
- type: map_at_20
value: 45.999
- type: map_at_3
value: 43.354
- type: map_at_5
value: 44.733000000000004
- type: mrr_at_1
value: 37.0
- type: mrr_at_10
value: 47.36547619047619
- type: mrr_at_100
value: 48.09705728333796
- type: mrr_at_1000
value: 48.152949244883104
- type: mrr_at_20
value: 47.69512736718619
- type: mrr_at_3
value: 45.388888888888886
- type: mrr_at_5
value: 46.605555555555554
- type: nauc_map_at_1000_diff1
value: 52.100145151741394
- type: nauc_map_at_1000_max
value: 27.410237212009648
- type: nauc_map_at_1000_std
value: 2.9904718168509814
- type: nauc_map_at_100_diff1
value: 52.078009501467115
- type: nauc_map_at_100_max
value: 27.388902536377337
- type: nauc_map_at_100_std
value: 2.9956426758632553
- type: nauc_map_at_10_diff1
value: 52.22446655004901
- type: nauc_map_at_10_max
value: 27.537880755428052
- type: nauc_map_at_10_std
value: 2.5329635707923672
- type: nauc_map_at_1_diff1
value: 56.87947977552147
- type: nauc_map_at_1_max
value: 26.992163127256497
- type: nauc_map_at_1_std
value: -0.9440039327267877
- type: nauc_map_at_20_diff1
value: 52.106371246476826
- type: nauc_map_at_20_max
value: 27.32862929056924
- type: nauc_map_at_20_std
value: 2.7349113689801996
- type: nauc_map_at_3_diff1
value: 53.35317860724047
- type: nauc_map_at_3_max
value: 26.25510463708658
- type: nauc_map_at_3_std
value: 2.289593280073433
- type: nauc_map_at_5_diff1
value: 51.678047431193974
- type: nauc_map_at_5_max
value: 27.418395689002818
- type: nauc_map_at_5_std
value: 2.1245361198440267
- type: nauc_mrr_at_1000_diff1
value: 49.98301669091194
- type: nauc_mrr_at_1000_max
value: 29.333209267321198
- type: nauc_mrr_at_1000_std
value: 5.252782451549811
- type: nauc_mrr_at_100_diff1
value: 49.967980336744034
- type: nauc_mrr_at_100_max
value: 29.331397088810657
- type: nauc_mrr_at_100_std
value: 5.261178047875302
- type: nauc_mrr_at_10_diff1
value: 50.02865512004594
- type: nauc_mrr_at_10_max
value: 29.665247088988096
- type: nauc_mrr_at_10_std
value: 5.105677188444364
- type: nauc_mrr_at_1_diff1
value: 55.219664224743944
- type: nauc_mrr_at_1_max
value: 29.369235255966586
- type: nauc_mrr_at_1_std
value: 1.294523738013475
- type: nauc_mrr_at_20_diff1
value: 49.98301552378738
- type: nauc_mrr_at_20_max
value: 29.388470718856922
- type: nauc_mrr_at_20_std
value: 5.178678395201041
- type: nauc_mrr_at_3_diff1
value: 51.00229122885918
- type: nauc_mrr_at_3_max
value: 28.064602643242907
- type: nauc_mrr_at_3_std
value: 4.744718855685464
- type: nauc_mrr_at_5_diff1
value: 49.20787956974137
- type: nauc_mrr_at_5_max
value: 29.663856377950655
- type: nauc_mrr_at_5_std
value: 4.889452630825029
- type: nauc_ndcg_at_1000_diff1
value: 50.26524611758448
- type: nauc_ndcg_at_1000_max
value: 28.816092638532105
- type: nauc_ndcg_at_1000_std
value: 5.777693934805941
- type: nauc_ndcg_at_100_diff1
value: 49.810321964883876
- type: nauc_ndcg_at_100_max
value: 28.85200497094049
- type: nauc_ndcg_at_100_std
value: 6.4161665223690445
- type: nauc_ndcg_at_10_diff1
value: 50.31987402674788
- type: nauc_ndcg_at_10_max
value: 29.1957589259604
- type: nauc_ndcg_at_10_std
value: 4.249172262339034
- type: nauc_ndcg_at_1_diff1
value: 55.219664224743944
- type: nauc_ndcg_at_1_max
value: 29.369235255966586
- type: nauc_ndcg_at_1_std
value: 1.294523738013475
- type: nauc_ndcg_at_20_diff1
value: 49.95117201846568
- type: nauc_ndcg_at_20_max
value: 28.252381258706883
- type: nauc_ndcg_at_20_std
value: 4.799900939787535
- type: nauc_ndcg_at_3_diff1
value: 51.81554260088138
- type: nauc_ndcg_at_3_max
value: 27.121304990834222
- type: nauc_ndcg_at_3_std
value: 3.720528057690934
- type: nauc_ndcg_at_5_diff1
value: 48.77973374919412
- type: nauc_ndcg_at_5_max
value: 29.131535344710002
- type: nauc_ndcg_at_5_std
value: 3.565095958368389
- type: nauc_precision_at_1000_diff1
value: -7.462742973759457
- type: nauc_precision_at_1000_max
value: 21.45790554414784
- type: nauc_precision_at_1000_std
value: 24.38429850971904
- type: nauc_precision_at_100_diff1
value: 10.210409634704046
- type: nauc_precision_at_100_max
value: 27.700772933352024
- type: nauc_precision_at_100_std
value: 27.80962272064547
- type: nauc_precision_at_10_diff1
value: 34.576585797430766
- type: nauc_precision_at_10_max
value: 33.364848337655786
- type: nauc_precision_at_10_std
value: 14.448906660652794
- type: nauc_precision_at_1_diff1
value: 55.219664224743944
- type: nauc_precision_at_1_max
value: 29.369235255966586
- type: nauc_precision_at_1_std
value: 1.294523738013475
- type: nauc_precision_at_20_diff1
value: 28.759871255957847
- type: nauc_precision_at_20_max
value: 28.756353659179982
- type: nauc_precision_at_20_std
value: 17.539177234113616
- type: nauc_precision_at_3_diff1
value: 44.99876896761731
- type: nauc_precision_at_3_max
value: 28.597098219106442
- type: nauc_precision_at_3_std
value: 9.21762492818973
- type: nauc_precision_at_5_diff1
value: 34.186850914452485
- type: nauc_precision_at_5_max
value: 33.954540973558686
- type: nauc_precision_at_5_std
value: 10.546528423678431
- type: nauc_recall_at_1000_diff1
value: 23.83001981280335
- type: nauc_recall_at_1000_max
value: 43.846644348796225
- type: nauc_recall_at_1000_std
value: 60.408553665368835
- type: nauc_recall_at_100_diff1
value: 38.4746907480832
- type: nauc_recall_at_100_max
value: 33.882306484150135
- type: nauc_recall_at_100_std
value: 27.750836673176565
- type: nauc_recall_at_10_diff1
value: 44.98978983013661
- type: nauc_recall_at_10_max
value: 31.241708340662296
- type: nauc_recall_at_10_std
value: 6.026684637828198
- type: nauc_recall_at_1_diff1
value: 56.87947977552147
- type: nauc_recall_at_1_max
value: 26.992163127256497
- type: nauc_recall_at_1_std
value: -0.9440039327267877
- type: nauc_recall_at_20_diff1
value: 43.253384002784074
- type: nauc_recall_at_20_max
value: 26.89815696422301
- type: nauc_recall_at_20_std
value: 8.446980210355042
- type: nauc_recall_at_3_diff1
value: 48.89792955260931
- type: nauc_recall_at_3_max
value: 26.765492965973237
- type: nauc_recall_at_3_std
value: 5.600856860068723
- type: nauc_recall_at_5_diff1
value: 40.79334879234603
- type: nauc_recall_at_5_max
value: 31.676509416439163
- type: nauc_recall_at_5_std
value: 4.7055724522242
- type: ndcg_at_1
value: 37.0
- type: ndcg_at_10
value: 50.663000000000004
- type: ndcg_at_100
value: 55.022999999999996
- type: ndcg_at_1000
value: 56.643
- type: ndcg_at_20
value: 52.001
- type: ndcg_at_3
value: 46.424
- type: ndcg_at_5
value: 48.653999999999996
- type: precision_at_1
value: 37.0
- type: precision_at_10
value: 7.133000000000001
- type: precision_at_100
value: 0.9570000000000001
- type: precision_at_1000
value: 0.11
- type: precision_at_20
value: 3.8670000000000004
- type: precision_at_3
value: 19.0
- type: precision_at_5
value: 12.733
- type: recall_at_1
value: 34.9
- type: recall_at_10
value: 64.372
- type: recall_at_100
value: 84.806
- type: recall_at_1000
value: 97.26700000000001
- type: recall_at_20
value: 69.428
- type: recall_at_3
value: 52.983000000000004
- type: recall_at_5
value: 58.428000000000004
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions (default)
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.6029702970297
- type: cosine_accuracy_threshold
value: 78.96339297294617
- type: cosine_ap
value: 85.09945680365945
- type: cosine_f1
value: 79.00249376558605
- type: cosine_f1_threshold
value: 77.54697799682617
- type: cosine_precision
value: 78.80597014925374
- type: cosine_recall
value: 79.2
- type: dot_accuracy
value: 99.07128712871287
- type: dot_accuracy_threshold
value: 113537.78076171875
- type: dot_ap
value: 32.974014883183614
- type: dot_f1
value: 38.70665417057169
- type: dot_f1_threshold
value: 82395.60546875
- type: dot_precision
value: 36.41975308641975
- type: dot_recall
value: 41.3
- type: euclidean_accuracy
value: 99.35742574257425
- type: euclidean_accuracy_threshold
value: 1716.6461944580078
- type: euclidean_ap
value: 60.79241641393818
- type: euclidean_f1
value: 61.254199328107504
- type: euclidean_f1_threshold
value: 1787.368392944336
- type: euclidean_precision
value: 69.59287531806616
- type: euclidean_recall
value: 54.7
- type: main_score
value: 85.09945680365945
- type: manhattan_accuracy
value: 99.35544554455446
- type: manhattan_accuracy_threshold
value: 21216.224670410156
- type: manhattan_ap
value: 60.67247165482485
- type: manhattan_f1
value: 61.16876024030584
- type: manhattan_f1_threshold
value: 22668.411254882812
- type: manhattan_precision
value: 67.38868832731649
- type: manhattan_recall
value: 56.00000000000001
- type: max_accuracy
value: 99.6029702970297
- type: max_ap
value: 85.09945680365945
- type: max_f1
value: 79.00249376558605
- type: max_precision
value: 78.80597014925374
- type: max_recall
value: 79.2
- type: similarity_accuracy
value: 99.6029702970297
- type: similarity_accuracy_threshold
value: 78.96339297294617
- type: similarity_ap
value: 85.09945680365945
- type: similarity_f1
value: 79.00249376558605
- type: similarity_f1_threshold
value: 77.54697799682617
- type: similarity_precision
value: 78.80597014925374
- type: similarity_recall
value: 79.2
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering (default)
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 40.01875953666112
- type: v_measure
value: 40.01875953666112
- type: v_measure_std
value: 4.519991014119391
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P (default)
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 28.81354037080584
- type: v_measure
value: 28.81354037080584
- type: v_measure_std
value: 1.4144350664362755
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions (default)
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 44.09716409649705
- type: map
value: 44.09716409649705
- type: mrr
value: 44.662380103556565
- type: nAUC_map_diff1
value: 35.29255607823797
- type: nAUC_map_max
value: 16.421837723462147
- type: nAUC_map_std
value: 6.1302069782322315
- type: nAUC_mrr_diff1
value: 34.559928528154806
- type: nAUC_mrr_max
value: 17.207604918830953
- type: nAUC_mrr_std
value: 6.664790258906265
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 29.294245469087553
- type: cosine_spearman
value: 30.080488918284974
- type: dot_pearson
value: 18.322393003009722
- type: dot_spearman
value: 20.941469677129597
- type: main_score
value: 30.080488918284974
- type: pearson
value: 29.294245469087553
- type: spearman
value: 30.080488918284974
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID (default)
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 39.983999999999995
- type: map_at_1
value: 0.106
- type: map_at_10
value: 0.644
- type: map_at_100
value: 3.021
- type: map_at_1000
value: 7.86
- type: map_at_20
value: 1.0959999999999999
- type: map_at_3
value: 0.26
- type: map_at_5
value: 0.383
- type: mrr_at_1
value: 52.0
- type: mrr_at_10
value: 63.62142857142856
- type: mrr_at_100
value: 64.14120879120878
- type: mrr_at_1000
value: 64.15196147938082
- type: mrr_at_20
value: 64.06428571428572
- type: mrr_at_3
value: 60.33333333333333
- type: mrr_at_5
value: 62.133333333333326
- type: nauc_map_at_1000_diff1
value: 24.416863084123577
- type: nauc_map_at_1000_max
value: 38.56500518410879
- type: nauc_map_at_1000_std
value: 57.28416632982124
- type: nauc_map_at_100_diff1
value: 7.320029678013508
- type: nauc_map_at_100_max
value: 31.67441200824679
- type: nauc_map_at_100_std
value: 46.99676723594155
- type: nauc_map_at_10_diff1
value: 2.1592330331050635
- type: nauc_map_at_10_max
value: 26.48308930412215
- type: nauc_map_at_10_std
value: 32.1215432254444
- type: nauc_map_at_1_diff1
value: 19.602070971946954
- type: nauc_map_at_1_max
value: 8.20575258643758
- type: nauc_map_at_1_std
value: 17.150126202821102
- type: nauc_map_at_20_diff1
value: 1.4525678948841099
- type: nauc_map_at_20_max
value: 25.398372034894923
- type: nauc_map_at_20_std
value: 37.98656048425611
- type: nauc_map_at_3_diff1
value: 14.189476148666769
- type: nauc_map_at_3_max
value: 13.645814074115348
- type: nauc_map_at_3_std
value: 24.193562926020505
- type: nauc_map_at_5_diff1
value: 6.385516140164152
- type: nauc_map_at_5_max
value: 19.028014747196977
- type: nauc_map_at_5_std
value: 27.2670171970273
- type: nauc_mrr_at_1000_diff1
value: 29.927939844415192
- type: nauc_mrr_at_1000_max
value: 19.139062731303653
- type: nauc_mrr_at_1000_std
value: 30.750244889158466
- type: nauc_mrr_at_100_diff1
value: 29.955577537768708
- type: nauc_mrr_at_100_max
value: 19.15999969363906
- type: nauc_mrr_at_100_std
value: 30.777558250465532
- type: nauc_mrr_at_10_diff1
value: 29.75190425697829
- type: nauc_mrr_at_10_max
value: 19.247901214296146
- type: nauc_mrr_at_10_std
value: 30.12495769940457
- type: nauc_mrr_at_1_diff1
value: 25.319658305674935
- type: nauc_mrr_at_1_max
value: 19.408020022852174
- type: nauc_mrr_at_1_std
value: 30.518526579248036
- type: nauc_mrr_at_20_diff1
value: 29.381724804135523
- type: nauc_mrr_at_20_max
value: 18.78203200071421
- type: nauc_mrr_at_20_std
value: 30.201392736164536
- type: nauc_mrr_at_3_diff1
value: 33.49197973287976
- type: nauc_mrr_at_3_max
value: 16.821299944157854
- type: nauc_mrr_at_3_std
value: 32.95866142740776
- type: nauc_mrr_at_5_diff1
value: 30.519933718405962
- type: nauc_mrr_at_5_max
value: 20.873028786250366
- type: nauc_mrr_at_5_std
value: 31.53952703715278
- type: nauc_ndcg_at_1000_diff1
value: 19.56599546833078
- type: nauc_ndcg_at_1000_max
value: 31.55417192496882
- type: nauc_ndcg_at_1000_std
value: 46.03469380933216
- type: nauc_ndcg_at_100_diff1
value: 17.03409656600608
- type: nauc_ndcg_at_100_max
value: 30.018921010755896
- type: nauc_ndcg_at_100_std
value: 42.083969481235535
- type: nauc_ndcg_at_10_diff1
value: 9.622601053598032
- type: nauc_ndcg_at_10_max
value: 24.036876646465473
- type: nauc_ndcg_at_10_std
value: 29.264022469658542
- type: nauc_ndcg_at_1_diff1
value: 10.162034267788544
- type: nauc_ndcg_at_1_max
value: 14.902101527295905
- type: nauc_ndcg_at_1_std
value: 22.89481729606148
- type: nauc_ndcg_at_20_diff1
value: 11.827596896516578
- type: nauc_ndcg_at_20_max
value: 21.89722632493682
- type: nauc_ndcg_at_20_std
value: 34.10813108354046
- type: nauc_ndcg_at_3_diff1
value: 9.885830514681343
- type: nauc_ndcg_at_3_max
value: 18.645371242229174
- type: nauc_ndcg_at_3_std
value: 27.61014855490183
- type: nauc_ndcg_at_5_diff1
value: 7.016021785588281
- type: nauc_ndcg_at_5_max
value: 21.223071359768444
- type: nauc_ndcg_at_5_std
value: 26.398061449644693
- type: nauc_precision_at_1000_diff1
value: 21.951465290665013
- type: nauc_precision_at_1000_max
value: 29.28795349580752
- type: nauc_precision_at_1000_std
value: 43.851885410437404
- type: nauc_precision_at_100_diff1
value: 20.103205413776266
- type: nauc_precision_at_100_max
value: 29.53467404908886
- type: nauc_precision_at_100_std
value: 43.41214281168461
- type: nauc_precision_at_10_diff1
value: 9.327632341614823
- type: nauc_precision_at_10_max
value: 27.739929968318993
- type: nauc_precision_at_10_std
value: 30.029060765584443
- type: nauc_precision_at_1_diff1
value: 25.319658305674935
- type: nauc_precision_at_1_max
value: 19.408020022852174
- type: nauc_precision_at_1_std
value: 30.518526579248036
- type: nauc_precision_at_20_diff1
value: 12.507551705078598
- type: nauc_precision_at_20_max
value: 25.437784661790673
- type: nauc_precision_at_20_std
value: 37.6038493343788
- type: nauc_precision_at_3_diff1
value: 17.302840903240426
- type: nauc_precision_at_3_max
value: 18.240884706076184
- type: nauc_precision_at_3_std
value: 32.34758075311221
- type: nauc_precision_at_5_diff1
value: 10.643711764387417
- type: nauc_precision_at_5_max
value: 24.411239239889554
- type: nauc_precision_at_5_std
value: 28.767392128200953
- type: nauc_recall_at_1000_diff1
value: 18.932208342315853
- type: nauc_recall_at_1000_max
value: 28.482052015706234
- type: nauc_recall_at_1000_std
value: 44.983993721189705
- type: nauc_recall_at_100_diff1
value: 12.30127094174658
- type: nauc_recall_at_100_max
value: 25.614395729836016
- type: nauc_recall_at_100_std
value: 40.04868566707452
- type: nauc_recall_at_10_diff1
value: -4.63806503951543
- type: nauc_recall_at_10_max
value: 25.05145496553497
- type: nauc_recall_at_10_std
value: 24.09893875274637
- type: nauc_recall_at_1_diff1
value: 19.602070971946954
- type: nauc_recall_at_1_max
value: 8.20575258643758
- type: nauc_recall_at_1_std
value: 17.150126202821102
- type: nauc_recall_at_20_diff1
value: 3.229932027028801
- type: nauc_recall_at_20_max
value: 18.794275827349168
- type: nauc_recall_at_20_std
value: 30.248974156728046
- type: nauc_recall_at_3_diff1
value: 15.00878750843053
- type: nauc_recall_at_3_max
value: 9.046387583277276
- type: nauc_recall_at_3_std
value: 22.79927256744018
- type: nauc_recall_at_5_diff1
value: 1.9090462818828973
- type: nauc_recall_at_5_max
value: 17.416622454402713
- type: nauc_recall_at_5_std
value: 21.915265437836833
- type: ndcg_at_1
value: 45.0
- type: ndcg_at_10
value: 39.983999999999995
- type: ndcg_at_100
value: 27.095999999999997
- type: ndcg_at_1000
value: 24.454
- type: ndcg_at_20
value: 37.319
- type: ndcg_at_3
value: 43.704
- type: ndcg_at_5
value: 41.568
- type: precision_at_1
value: 52.0
- type: precision_at_10
value: 42.6
- type: precision_at_100
value: 27.72
- type: precision_at_1000
value: 11.844000000000001
- type: precision_at_20
value: 39.6
- type: precision_at_3
value: 48.667
- type: precision_at_5
value: 45.6
- type: recall_at_1
value: 0.106
- type: recall_at_10
value: 0.9159999999999999
- type: recall_at_100
value: 5.715
- type: recall_at_1000
value: 23.662
- type: recall_at_20
value: 1.7160000000000002
- type: recall_at_3
value: 0.302
- type: recall_at_5
value: 0.482
- task:
type: Retrieval
dataset:
name: MTEB Touche2020 (default)
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 13.753000000000002
- type: map_at_1
value: 1.5970000000000002
- type: map_at_10
value: 4.601
- type: map_at_100
value: 7.7700000000000005
- type: map_at_1000
value: 9.096
- type: map_at_20
value: 5.817
- type: map_at_3
value: 2.377
- type: map_at_5
value: 2.98
- type: mrr_at_1
value: 22.448979591836736
- type: mrr_at_10
value: 33.38030450275348
- type: mrr_at_100
value: 35.01828931874863
- type: mrr_at_1000
value: 35.037725664715595
- type: mrr_at_20
value: 34.6865889212828
- type: mrr_at_3
value: 28.231292517006807
- type: mrr_at_5
value: 31.394557823129254
- type: nauc_map_at_1000_diff1
value: -11.252417383140266
- type: nauc_map_at_1000_max
value: -37.24375623641661
- type: nauc_map_at_1000_std
value: -38.122086330314595
- type: nauc_map_at_100_diff1
value: -13.970621196322664
- type: nauc_map_at_100_max
value: -39.871220844684366
- type: nauc_map_at_100_std
value: -41.05324590181932
- type: nauc_map_at_10_diff1
value: -12.163263778180402
- type: nauc_map_at_10_max
value: -36.76984556993433
- type: nauc_map_at_10_std
value: -37.53503392844242
- type: nauc_map_at_1_diff1
value: -21.481769300580112
- type: nauc_map_at_1_max
value: -34.78475326600437
- type: nauc_map_at_1_std
value: -31.34442054238037
- type: nauc_map_at_20_diff1
value: -14.607331295503842
- type: nauc_map_at_20_max
value: -40.507883730110066
- type: nauc_map_at_20_std
value: -42.25172210956502
- type: nauc_map_at_3_diff1
value: -16.11765086583003
- type: nauc_map_at_3_max
value: -39.875149479128375
- type: nauc_map_at_3_std
value: -36.495342441290575
- type: nauc_map_at_5_diff1
value: -12.762015642768567
- type: nauc_map_at_5_max
value: -35.84513643191068
- type: nauc_map_at_5_std
value: -34.507874404019105
- type: nauc_mrr_at_1000_diff1
value: -14.380678398651431
- type: nauc_mrr_at_1000_max
value: -34.916144132151764
- type: nauc_mrr_at_1000_std
value: -37.97719898398948
- type: nauc_mrr_at_100_diff1
value: -14.315571331226579
- type: nauc_mrr_at_100_max
value: -34.82941353583672
- type: nauc_mrr_at_100_std
value: -37.88850059416566
- type: nauc_mrr_at_10_diff1
value: -15.357854232460392
- type: nauc_mrr_at_10_max
value: -35.50556512154432
- type: nauc_mrr_at_10_std
value: -39.177327110088726
- type: nauc_mrr_at_1_diff1
value: -20.81375579297355
- type: nauc_mrr_at_1_max
value: -29.68218990777337
- type: nauc_mrr_at_1_std
value: -32.340167902766225
- type: nauc_mrr_at_20_diff1
value: -14.007415589033556
- type: nauc_mrr_at_20_max
value: -35.07243301300378
- type: nauc_mrr_at_20_std
value: -38.4083789449898
- type: nauc_mrr_at_3_diff1
value: -18.09416617081835
- type: nauc_mrr_at_3_max
value: -36.95185320631812
- type: nauc_mrr_at_3_std
value: -35.64342684468998
- type: nauc_mrr_at_5_diff1
value: -15.183051674277138
- type: nauc_mrr_at_5_max
value: -34.67724348034976
- type: nauc_mrr_at_5_std
value: -35.5955991849333
- type: nauc_ndcg_at_1000_diff1
value: 0.8638249190254136
- type: nauc_ndcg_at_1000_max
value: -27.240531292789573
- type: nauc_ndcg_at_1000_std
value: -26.34406627094641
- type: nauc_ndcg_at_100_diff1
value: -10.272509858747428
- type: nauc_ndcg_at_100_max
value: -40.27645670071093
- type: nauc_ndcg_at_100_std
value: -40.20324905617718
- type: nauc_ndcg_at_10_diff1
value: -10.251898880214641
- type: nauc_ndcg_at_10_max
value: -31.66063506955603
- type: nauc_ndcg_at_10_std
value: -35.18245248110904
- type: nauc_ndcg_at_1_diff1
value: -22.15796091381088
- type: nauc_ndcg_at_1_max
value: -28.012386493294734
- type: nauc_ndcg_at_1_std
value: -28.75534254770048
- type: nauc_ndcg_at_20_diff1
value: -13.257359699197114
- type: nauc_ndcg_at_20_max
value: -39.25007814100781
- type: nauc_ndcg_at_20_std
value: -41.74617039563512
- type: nauc_ndcg_at_3_diff1
value: -14.633327352889419
- type: nauc_ndcg_at_3_max
value: -35.76970667496168
- type: nauc_ndcg_at_3_std
value: -34.78512355124301
- type: nauc_ndcg_at_5_diff1
value: -9.008702427186012
- type: nauc_ndcg_at_5_max
value: -27.057510395795788
- type: nauc_ndcg_at_5_std
value: -31.06336991460067
- type: nauc_precision_at_1000_diff1
value: 24.915422567175415
- type: nauc_precision_at_1000_max
value: 47.53560015584683
- type: nauc_precision_at_1000_std
value: 38.21701614763806
- type: nauc_precision_at_100_diff1
value: 6.645491992850349
- type: nauc_precision_at_100_max
value: -14.578256280924878
- type: nauc_precision_at_100_std
value: -23.049085659678926
- type: nauc_precision_at_10_diff1
value: -0.9667619260601806
- type: nauc_precision_at_10_max
value: -25.529150834147217
- type: nauc_precision_at_10_std
value: -35.81209624358855
- type: nauc_precision_at_1_diff1
value: -20.81375579297355
- type: nauc_precision_at_1_max
value: -29.68218990777337
- type: nauc_precision_at_1_std
value: -32.340167902766225
- type: nauc_precision_at_20_diff1
value: -5.664913271170427
- type: nauc_precision_at_20_max
value: -31.789766954167682
- type: nauc_precision_at_20_std
value: -43.24957806575219
- type: nauc_precision_at_3_diff1
value: -8.78321692449596
- type: nauc_precision_at_3_max
value: -40.94190027571407
- type: nauc_precision_at_3_std
value: -40.42051526602616
- type: nauc_precision_at_5_diff1
value: -0.6700857649701735
- type: nauc_precision_at_5_max
value: -25.396527239026117
- type: nauc_precision_at_5_std
value: -31.60992759387055
- type: nauc_recall_at_1000_diff1
value: 6.608885618295343
- type: nauc_recall_at_1000_max
value: -17.90157348658524
- type: nauc_recall_at_1000_std
value: 1.4128128959708763
- type: nauc_recall_at_100_diff1
value: -10.790017345080633
- type: nauc_recall_at_100_max
value: -42.67969932770011
- type: nauc_recall_at_100_std
value: -36.57531070739207
- type: nauc_recall_at_10_diff1
value: -9.632249853815987
- type: nauc_recall_at_10_max
value: -35.775869145222444
- type: nauc_recall_at_10_std
value: -38.6290217611413
- type: nauc_recall_at_1_diff1
value: -21.481769300580112
- type: nauc_recall_at_1_max
value: -34.78475326600437
- type: nauc_recall_at_1_std
value: -31.34442054238037
- type: nauc_recall_at_20_diff1
value: -16.584366120363462
- type: nauc_recall_at_20_max
value: -45.0011419751979
- type: nauc_recall_at_20_std
value: -46.22137916249736
- type: nauc_recall_at_3_diff1
value: -16.227776403050605
- type: nauc_recall_at_3_max
value: -46.19831636902846
- type: nauc_recall_at_3_std
value: -39.31769096438802
- type: nauc_recall_at_5_diff1
value: -8.463083898122722
- type: nauc_recall_at_5_max
value: -34.1285878720165
- type: nauc_recall_at_5_std
value: -33.56523176213727
- type: ndcg_at_1
value: 19.387999999999998
- type: ndcg_at_10
value: 13.753000000000002
- type: ndcg_at_100
value: 23.552
- type: ndcg_at_1000
value: 36.061
- type: ndcg_at_20
value: 15.113999999999999
- type: ndcg_at_3
value: 14.994
- type: ndcg_at_5
value: 13.927
- type: precision_at_1
value: 22.448999999999998
- type: precision_at_10
value: 13.469000000000001
- type: precision_at_100
value: 5.531
- type: precision_at_1000
value: 1.333
- type: precision_at_20
value: 11.224
- type: precision_at_3
value: 15.645999999999999
- type: precision_at_5
value: 14.693999999999999
- type: recall_at_1
value: 1.5970000000000002
- type: recall_at_10
value: 9.428
- type: recall_at_100
value: 34.227000000000004
- type: recall_at_1000
value: 72.233
- type: recall_at_20
value: 15.456
- type: recall_at_3
value: 3.024
- type: recall_at_5
value: 4.776
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification (default)
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 65.6884765625
- type: ap
value: 11.395400787741414
- type: ap_weighted
value: 11.395400787741414
- type: f1
value: 49.997667284332806
- type: f1_weighted
value: 73.34420433686675
- type: main_score
value: 65.6884765625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification (default)
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 49.83305036785513
- type: f1
value: 49.97910620163813
- type: f1_weighted
value: 49.32130156716104
- type: main_score
value: 49.83305036785513
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering (default)
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 25.27920179659098
- type: v_measure
value: 25.27920179659098
- type: v_measure_std
value: 2.092324622279832
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015 (default)
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 82.19586338439531
- type: cosine_accuracy_threshold
value: 75.0169038772583
- type: cosine_ap
value: 60.22081236487149
- type: cosine_f1
value: 57.192894671003245
- type: cosine_f1_threshold
value: 69.5034384727478
- type: cosine_precision
value: 54.3767840152236
- type: cosine_recall
value: 60.31662269129288
- type: dot_accuracy
value: 77.92215533170412
- type: dot_accuracy_threshold
value: 106759.60693359375
- type: dot_ap
value: 40.49772647740827
- type: dot_f1
value: 46.14293314417449
- type: dot_f1_threshold
value: 67732.36083984375
- type: dot_precision
value: 34.748931623931625
- type: dot_recall
value: 68.65435356200528
- type: euclidean_accuracy
value: 80.45538534898968
- type: euclidean_accuracy_threshold
value: 2147.9385375976562
- type: euclidean_ap
value: 52.814058086493475
- type: euclidean_f1
value: 50.80232161147149
- type: euclidean_f1_threshold
value: 2624.5105743408203
- type: euclidean_precision
value: 44.66680008004803
- type: euclidean_recall
value: 58.89182058047493
- type: main_score
value: 60.22081236487149
- type: manhattan_accuracy
value: 80.53883292602968
- type: manhattan_accuracy_threshold
value: 27107.672119140625
- type: manhattan_ap
value: 53.53662771884282
- type: manhattan_f1
value: 51.65052816901407
- type: manhattan_f1_threshold
value: 33232.24792480469
- type: manhattan_precision
value: 44.299735749339376
- type: manhattan_recall
value: 61.92612137203166
- type: max_accuracy
value: 82.19586338439531
- type: max_ap
value: 60.22081236487149
- type: max_f1
value: 57.192894671003245
- type: max_precision
value: 54.3767840152236
- type: max_recall
value: 68.65435356200528
- type: similarity_accuracy
value: 82.19586338439531
- type: similarity_accuracy_threshold
value: 75.0169038772583
- type: similarity_ap
value: 60.22081236487149
- type: similarity_f1
value: 57.192894671003245
- type: similarity_f1_threshold
value: 69.5034384727478
- type: similarity_precision
value: 54.3767840152236
- type: similarity_recall
value: 60.31662269129288
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus (default)
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 85.86758256684907
- type: cosine_accuracy_threshold
value: 73.03299903869629
- type: cosine_ap
value: 78.79896751132692
- type: cosine_f1
value: 70.93762938984453
- type: cosine_f1_threshold
value: 69.51396465301514
- type: cosine_precision
value: 69.39391707784078
- type: cosine_recall
value: 72.55158607945796
- type: dot_accuracy
value: 81.69169868436373
- type: dot_accuracy_threshold
value: 51796.2890625
- type: dot_ap
value: 66.49022700054283
- type: dot_f1
value: 62.167484157387854
- type: dot_f1_threshold
value: 42622.021484375
- type: dot_precision
value: 58.10078297530617
- type: dot_recall
value: 66.84631967970435
- type: euclidean_accuracy
value: 83.17809601428183
- type: euclidean_accuracy_threshold
value: 1687.9749298095703
- type: euclidean_ap
value: 70.39367677734302
- type: euclidean_f1
value: 62.79221027661935
- type: euclidean_f1_threshold
value: 1905.8393478393555
- type: euclidean_precision
value: 62.40778766446118
- type: euclidean_recall
value: 63.181398213735754
- type: main_score
value: 78.79896751132692
- type: manhattan_accuracy
value: 83.23631000892615
- type: manhattan_accuracy_threshold
value: 21191.021728515625
- type: manhattan_ap
value: 70.60408795606112
- type: manhattan_f1
value: 62.99311208515969
- type: manhattan_f1_threshold
value: 23671.893310546875
- type: manhattan_precision
value: 64.05603311047437
- type: manhattan_recall
value: 61.964890668309216
- type: max_accuracy
value: 85.86758256684907
- type: max_ap
value: 78.79896751132692
- type: max_f1
value: 70.93762938984453
- type: max_precision
value: 69.39391707784078
- type: max_recall
value: 72.55158607945796
- type: similarity_accuracy
value: 85.86758256684907
- type: similarity_accuracy_threshold
value: 73.03299903869629
- type: similarity_ap
value: 78.79896751132692
- type: similarity_f1
value: 70.93762938984453
- type: similarity_f1_threshold
value: 69.51396465301514
- type: similarity_precision
value: 69.39391707784078
- type: similarity_recall
value: 72.55158607945796
---
# M2V_base_glove_subword Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical.
## Installation
Install model2vec using pip:
```
pip install model2vec
```
## Usage
Load this model using the `from_pretrained` method:
```python
from model2vec import StaticModel
# Load a pretrained Model2Vec model
model = StaticModel.from_pretrained("minishlab/M2V_base_glove_subword")
# Compute text embeddings
embeddings = model.encode(["Example sentence"])
```
Alternatively, you can distill your own model using the `distill` method:
```python
from model2vec.distill import distill
# Choose a Sentence Transformer model
model_name = "BAAI/bge-base-en-v1.5"
# Distill the model
m2v_model = distill(model_name=model_name, pca_dims=256)
# Save the model
m2v_model.save_pretrained("m2v_model")
```
## How it works
Model2Vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using Zipf weighting. During inference, we simply take the mean of all token embeddings occurring in a sentence.
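To make the inference step concrete, here is a minimal NumPy sketch of mean-pooled static embeddings. The vocabulary, embedding table, and whitespace tokenization below are hypothetical placeholders for illustration, not the actual model2vec internals; in the real model, each row of the table was produced by the sentence transformer, PCA-reduced, and Zipf-weighted at distillation time.
```python
import numpy as np

# Hypothetical static embedding table: one pre-computed vector per token.
vocab = {"example": 0, "sentence": 1, "[UNK]": 2}
embedding_table = np.random.rand(len(vocab), 256).astype(np.float32)

def encode(text: str) -> np.ndarray:
    """Embed a sentence as the mean of its static token embeddings."""
    token_ids = [vocab.get(tok, vocab["[UNK]"]) for tok in text.lower().split()]
    return embedding_table[token_ids].mean(axis=0)

print(encode("Example sentence").shape)  # (256,)
```
Because inference is just a table lookup and a mean, there is no forward pass through a transformer, which is where the orders-of-magnitude speedup comes from.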
## Additional Resources
- [All Model2Vec models on the hub](https://huggingface.co/models?library=model2vec)
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Results](https://github.com/MinishLab/model2vec?tab=readme-ov-file#results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@software{minishlab2024model2vec,
  author = {Stephan Tulkens and Thomas van Dongen},
title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
year = {2024},
url = {https://github.com/MinishLab/model2vec},
}
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF | BenevolenceMessiah | sentence-similarity | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:nomic-ai/nomic-embed-text-v1.5",
"base_model:quantized:nomic-ai/nomic-embed-text-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-12-15T01:31:40 | 2024-12-15T01:31:43 | 44 | 0 | ---
base_model: nomic-ai/nomic-embed-text-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: epoch_0_model
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 75.20895522388058
- type: ap
value: 38.57605549557802
- type: f1
value: 69.35586565857854
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 91.8144
- type: ap
value: 88.65222882032363
- type: f1
value: 91.80426301643274
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.162000000000006
- type: f1
value: 46.59329642263158
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 24.253
- type: map_at_10
value: 38.962
- type: map_at_100
value: 40.081
- type: map_at_1000
value: 40.089000000000006
- type: map_at_3
value: 33.499
- type: map_at_5
value: 36.351
- type: mrr_at_1
value: 24.609
- type: mrr_at_10
value: 39.099000000000004
- type: mrr_at_100
value: 40.211000000000006
- type: mrr_at_1000
value: 40.219
- type: mrr_at_3
value: 33.677
- type: mrr_at_5
value: 36.469
- type: ndcg_at_1
value: 24.253
- type: ndcg_at_10
value: 48.010999999999996
- type: ndcg_at_100
value: 52.756
- type: ndcg_at_1000
value: 52.964999999999996
- type: ndcg_at_3
value: 36.564
- type: ndcg_at_5
value: 41.711999999999996
- type: precision_at_1
value: 24.253
- type: precision_at_10
value: 7.738
- type: precision_at_100
value: 0.98
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 15.149000000000001
- type: precision_at_5
value: 11.593
- type: recall_at_1
value: 24.253
- type: recall_at_10
value: 77.383
- type: recall_at_100
value: 98.009
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 45.448
- type: recall_at_5
value: 57.965999999999994
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.69069567851087
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.35185490976283
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 61.71274951450321
- type: mrr
value: 76.06032625423207
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 86.73980520022269
- type: cos_sim_spearman
value: 84.24649792685918
- type: euclidean_pearson
value: 85.85197641158186
- type: euclidean_spearman
value: 84.24649792685918
- type: manhattan_pearson
value: 86.26809552711346
- type: manhattan_spearman
value: 84.56397504030865
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.25324675324674
- type: f1
value: 84.17872280892557
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 38.770253446400886
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.94307095497281
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.164
- type: map_at_10
value: 42.641
- type: map_at_100
value: 43.947
- type: map_at_1000
value: 44.074999999999996
- type: map_at_3
value: 39.592
- type: map_at_5
value: 41.204
- type: mrr_at_1
value: 39.628
- type: mrr_at_10
value: 48.625
- type: mrr_at_100
value: 49.368
- type: mrr_at_1000
value: 49.413000000000004
- type: mrr_at_3
value: 46.400000000000006
- type: mrr_at_5
value: 47.68
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 48.564
- type: ndcg_at_100
value: 53.507000000000005
- type: ndcg_at_1000
value: 55.635999999999996
- type: ndcg_at_3
value: 44.471
- type: ndcg_at_5
value: 46.137
- type: precision_at_1
value: 39.628
- type: precision_at_10
value: 8.856
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.268
- type: precision_at_5
value: 14.649000000000001
- type: recall_at_1
value: 32.164
- type: recall_at_10
value: 59.609
- type: recall_at_100
value: 80.521
- type: recall_at_1000
value: 94.245
- type: recall_at_3
value: 46.521
- type: recall_at_5
value: 52.083999999999996
- type: map_at_1
value: 31.526
- type: map_at_10
value: 41.581
- type: map_at_100
value: 42.815999999999995
- type: map_at_1000
value: 42.936
- type: map_at_3
value: 38.605000000000004
- type: map_at_5
value: 40.351
- type: mrr_at_1
value: 39.489999999999995
- type: mrr_at_10
value: 47.829
- type: mrr_at_100
value: 48.512
- type: mrr_at_1000
value: 48.552
- type: mrr_at_3
value: 45.754
- type: mrr_at_5
value: 46.986
- type: ndcg_at_1
value: 39.489999999999995
- type: ndcg_at_10
value: 47.269
- type: ndcg_at_100
value: 51.564
- type: ndcg_at_1000
value: 53.53099999999999
- type: ndcg_at_3
value: 43.301
- type: ndcg_at_5
value: 45.239000000000004
- type: precision_at_1
value: 39.489999999999995
- type: precision_at_10
value: 8.93
- type: precision_at_100
value: 1.415
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 20.892
- type: precision_at_5
value: 14.865999999999998
- type: recall_at_1
value: 31.526
- type: recall_at_10
value: 56.76
- type: recall_at_100
value: 75.029
- type: recall_at_1000
value: 87.491
- type: recall_at_3
value: 44.786
- type: recall_at_5
value: 50.254
- type: map_at_1
value: 40.987
- type: map_at_10
value: 52.827
- type: map_at_100
value: 53.751000000000005
- type: map_at_1000
value: 53.81
- type: map_at_3
value: 49.844
- type: map_at_5
value: 51.473
- type: mrr_at_1
value: 46.833999999999996
- type: mrr_at_10
value: 56.389
- type: mrr_at_100
value: 57.003
- type: mrr_at_1000
value: 57.034
- type: mrr_at_3
value: 54.17999999999999
- type: mrr_at_5
value: 55.486999999999995
- type: ndcg_at_1
value: 46.833999999999996
- type: ndcg_at_10
value: 58.372
- type: ndcg_at_100
value: 62.068
- type: ndcg_at_1000
value: 63.288
- type: ndcg_at_3
value: 53.400000000000006
- type: ndcg_at_5
value: 55.766000000000005
- type: precision_at_1
value: 46.833999999999996
- type: precision_at_10
value: 9.191
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 23.448
- type: precision_at_5
value: 15.862000000000002
- type: recall_at_1
value: 40.987
- type: recall_at_10
value: 71.146
- type: recall_at_100
value: 87.035
- type: recall_at_1000
value: 95.633
- type: recall_at_3
value: 58.025999999999996
- type: recall_at_5
value: 63.815999999999995
- type: map_at_1
value: 24.587
- type: map_at_10
value: 33.114
- type: map_at_100
value: 34.043
- type: map_at_1000
value: 34.123999999999995
- type: map_at_3
value: 30.45
- type: map_at_5
value: 31.813999999999997
- type: mrr_at_1
value: 26.554
- type: mrr_at_10
value: 35.148
- type: mrr_at_100
value: 35.926
- type: mrr_at_1000
value: 35.991
- type: mrr_at_3
value: 32.599000000000004
- type: mrr_at_5
value: 33.893
- type: ndcg_at_1
value: 26.554
- type: ndcg_at_10
value: 38.132
- type: ndcg_at_100
value: 42.78
- type: ndcg_at_1000
value: 44.919
- type: ndcg_at_3
value: 32.833
- type: ndcg_at_5
value: 35.168
- type: precision_at_1
value: 26.554
- type: precision_at_10
value: 5.921
- type: precision_at_100
value: 0.8659999999999999
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 13.861
- type: precision_at_5
value: 9.605
- type: recall_at_1
value: 24.587
- type: recall_at_10
value: 51.690000000000005
- type: recall_at_100
value: 73.428
- type: recall_at_1000
value: 89.551
- type: recall_at_3
value: 37.336999999999996
- type: recall_at_5
value: 43.047000000000004
- type: map_at_1
value: 16.715
- type: map_at_10
value: 24.251
- type: map_at_100
value: 25.326999999999998
- type: map_at_1000
value: 25.455
- type: map_at_3
value: 21.912000000000003
- type: map_at_5
value: 23.257
- type: mrr_at_1
value: 20.274
- type: mrr_at_10
value: 28.552
- type: mrr_at_100
value: 29.42
- type: mrr_at_1000
value: 29.497
- type: mrr_at_3
value: 26.14
- type: mrr_at_5
value: 27.502
- type: ndcg_at_1
value: 20.274
- type: ndcg_at_10
value: 29.088
- type: ndcg_at_100
value: 34.293
- type: ndcg_at_1000
value: 37.271
- type: ndcg_at_3
value: 24.708
- type: ndcg_at_5
value: 26.809
- type: precision_at_1
value: 20.274
- type: precision_at_10
value: 5.361
- type: precision_at_100
value: 0.915
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.733
- type: precision_at_5
value: 8.556999999999999
- type: recall_at_1
value: 16.715
- type: recall_at_10
value: 39.587
- type: recall_at_100
value: 62.336000000000006
- type: recall_at_1000
value: 83.453
- type: recall_at_3
value: 27.839999999999996
- type: recall_at_5
value: 32.952999999999996
- type: map_at_1
value: 28.793000000000003
- type: map_at_10
value: 38.582
- type: map_at_100
value: 39.881
- type: map_at_1000
value: 39.987
- type: map_at_3
value: 35.851
- type: map_at_5
value: 37.289
- type: mrr_at_1
value: 34.455999999999996
- type: mrr_at_10
value: 43.909
- type: mrr_at_100
value: 44.74
- type: mrr_at_1000
value: 44.786
- type: mrr_at_3
value: 41.659
- type: mrr_at_5
value: 43.010999999999996
- type: ndcg_at_1
value: 34.455999999999996
- type: ndcg_at_10
value: 44.266
- type: ndcg_at_100
value: 49.639
- type: ndcg_at_1000
value: 51.644
- type: ndcg_at_3
value: 39.865
- type: ndcg_at_5
value: 41.887
- type: precision_at_1
value: 34.455999999999996
- type: precision_at_10
value: 7.843999999999999
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 18.831999999999997
- type: precision_at_5
value: 13.147
- type: recall_at_1
value: 28.793000000000003
- type: recall_at_10
value: 55.68300000000001
- type: recall_at_100
value: 77.99000000000001
- type: recall_at_1000
value: 91.183
- type: recall_at_3
value: 43.293
- type: recall_at_5
value: 48.618
- type: map_at_1
value: 25.907000000000004
- type: map_at_10
value: 35.519
- type: map_at_100
value: 36.806
- type: map_at_1000
value: 36.912
- type: map_at_3
value: 32.748
- type: map_at_5
value: 34.232
- type: mrr_at_1
value: 31.621
- type: mrr_at_10
value: 40.687
- type: mrr_at_100
value: 41.583
- type: mrr_at_1000
value: 41.638999999999996
- type: mrr_at_3
value: 38.527
- type: mrr_at_5
value: 39.612
- type: ndcg_at_1
value: 31.621
- type: ndcg_at_10
value: 41.003
- type: ndcg_at_100
value: 46.617999999999995
- type: ndcg_at_1000
value: 48.82
- type: ndcg_at_3
value: 36.542
- type: ndcg_at_5
value: 38.368
- type: precision_at_1
value: 31.621
- type: precision_at_10
value: 7.396999999999999
- type: precision_at_100
value: 1.191
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 17.39
- type: precision_at_5
value: 12.1
- type: recall_at_1
value: 25.907000000000004
- type: recall_at_10
value: 52.115
- type: recall_at_100
value: 76.238
- type: recall_at_1000
value: 91.218
- type: recall_at_3
value: 39.417
- type: recall_at_5
value: 44.435
- type: map_at_1
value: 25.732166666666668
- type: map_at_10
value: 34.51616666666667
- type: map_at_100
value: 35.67241666666666
- type: map_at_1000
value: 35.78675
- type: map_at_3
value: 31.953416666666662
- type: map_at_5
value: 33.333
- type: mrr_at_1
value: 30.300166666666673
- type: mrr_at_10
value: 38.6255
- type: mrr_at_100
value: 39.46183333333334
- type: mrr_at_1000
value: 39.519999999999996
- type: mrr_at_3
value: 36.41299999999999
- type: mrr_at_5
value: 37.6365
- type: ndcg_at_1
value: 30.300166666666673
- type: ndcg_at_10
value: 39.61466666666667
- type: ndcg_at_100
value: 44.60808333333334
- type: ndcg_at_1000
value: 46.91708333333334
- type: ndcg_at_3
value: 35.26558333333333
- type: ndcg_at_5
value: 37.220000000000006
- type: precision_at_1
value: 30.300166666666673
- type: precision_at_10
value: 6.837416666666667
- type: precision_at_100
value: 1.10425
- type: precision_at_1000
value: 0.14875
- type: precision_at_3
value: 16.13716666666667
- type: precision_at_5
value: 11.2815
- type: recall_at_1
value: 25.732166666666668
- type: recall_at_10
value: 50.578916666666665
- type: recall_at_100
value: 72.42183333333334
- type: recall_at_1000
value: 88.48766666666667
- type: recall_at_3
value: 38.41325
- type: recall_at_5
value: 43.515750000000004
- type: map_at_1
value: 23.951
- type: map_at_10
value: 30.974
- type: map_at_100
value: 31.804
- type: map_at_1000
value: 31.900000000000002
- type: map_at_3
value: 28.762
- type: map_at_5
value: 29.94
- type: mrr_at_1
value: 26.534000000000002
- type: mrr_at_10
value: 33.553
- type: mrr_at_100
value: 34.297
- type: mrr_at_1000
value: 34.36
- type: mrr_at_3
value: 31.391000000000002
- type: mrr_at_5
value: 32.525999999999996
- type: ndcg_at_1
value: 26.534000000000002
- type: ndcg_at_10
value: 35.112
- type: ndcg_at_100
value: 39.28
- type: ndcg_at_1000
value: 41.723
- type: ndcg_at_3
value: 30.902
- type: ndcg_at_5
value: 32.759
- type: precision_at_1
value: 26.534000000000002
- type: precision_at_10
value: 5.445
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.11
- type: precision_at_3
value: 12.986
- type: precision_at_5
value: 9.049
- type: recall_at_1
value: 23.951
- type: recall_at_10
value: 45.24
- type: recall_at_100
value: 64.12299999999999
- type: recall_at_1000
value: 82.28999999999999
- type: recall_at_3
value: 33.806000000000004
- type: recall_at_5
value: 38.277
- type: map_at_1
value: 16.829
- type: map_at_10
value: 23.684
- type: map_at_100
value: 24.683
- type: map_at_1000
value: 24.81
- type: map_at_3
value: 21.554000000000002
- type: map_at_5
value: 22.768
- type: mrr_at_1
value: 20.096
- type: mrr_at_10
value: 27.230999999999998
- type: mrr_at_100
value: 28.083999999999996
- type: mrr_at_1000
value: 28.166000000000004
- type: mrr_at_3
value: 25.212
- type: mrr_at_5
value: 26.32
- type: ndcg_at_1
value: 20.096
- type: ndcg_at_10
value: 27.989000000000004
- type: ndcg_at_100
value: 32.847
- type: ndcg_at_1000
value: 35.896
- type: ndcg_at_3
value: 24.116
- type: ndcg_at_5
value: 25.964
- type: precision_at_1
value: 20.096
- type: precision_at_10
value: 5
- type: precision_at_100
value: 0.8750000000000001
- type: precision_at_1000
value: 0.131
- type: precision_at_3
value: 11.207
- type: precision_at_5
value: 8.08
- type: recall_at_1
value: 16.829
- type: recall_at_10
value: 37.407000000000004
- type: recall_at_100
value: 59.101000000000006
- type: recall_at_1000
value: 81.024
- type: recall_at_3
value: 26.739
- type: recall_at_5
value: 31.524
- type: map_at_1
value: 24.138
- type: map_at_10
value: 32.275999999999996
- type: map_at_100
value: 33.416000000000004
- type: map_at_1000
value: 33.527
- type: map_at_3
value: 29.854000000000003
- type: map_at_5
value: 31.096
- type: mrr_at_1
value: 28.450999999999997
- type: mrr_at_10
value: 36.214
- type: mrr_at_100
value: 37.134
- type: mrr_at_1000
value: 37.198
- type: mrr_at_3
value: 34.001999999999995
- type: mrr_at_5
value: 35.187000000000005
- type: ndcg_at_1
value: 28.450999999999997
- type: ndcg_at_10
value: 37.166
- type: ndcg_at_100
value: 42.454
- type: ndcg_at_1000
value: 44.976
- type: ndcg_at_3
value: 32.796
- type: ndcg_at_5
value: 34.631
- type: precision_at_1
value: 28.450999999999997
- type: precision_at_10
value: 6.241
- type: precision_at_100
value: 0.9950000000000001
- type: precision_at_1000
value: 0.133
- type: precision_at_3
value: 14.801
- type: precision_at_5
value: 10.280000000000001
- type: recall_at_1
value: 24.138
- type: recall_at_10
value: 48.111
- type: recall_at_100
value: 71.245
- type: recall_at_1000
value: 88.986
- type: recall_at_3
value: 36.119
- type: recall_at_5
value: 40.846
- type: map_at_1
value: 23.244
- type: map_at_10
value: 31.227
- type: map_at_100
value: 33.007
- type: map_at_1000
value: 33.223
- type: map_at_3
value: 28.924
- type: map_at_5
value: 30.017
- type: mrr_at_1
value: 27.668
- type: mrr_at_10
value: 35.524
- type: mrr_at_100
value: 36.699
- type: mrr_at_1000
value: 36.759
- type: mrr_at_3
value: 33.366
- type: mrr_at_5
value: 34.552
- type: ndcg_at_1
value: 27.668
- type: ndcg_at_10
value: 36.381
- type: ndcg_at_100
value: 43.062
- type: ndcg_at_1000
value: 45.656
- type: ndcg_at_3
value: 32.501999999999995
- type: ndcg_at_5
value: 34.105999999999995
- type: precision_at_1
value: 27.668
- type: precision_at_10
value: 6.798
- type: precision_at_100
value: 1.492
- type: precision_at_1000
value: 0.234
- type: precision_at_3
value: 15.152
- type: precision_at_5
value: 10.791
- type: recall_at_1
value: 23.244
- type: recall_at_10
value: 45.979
- type: recall_at_100
value: 74.822
- type: recall_at_1000
value: 91.078
- type: recall_at_3
value: 34.925
- type: recall_at_5
value: 39.126
- type: map_at_1
value: 19.945
- type: map_at_10
value: 27.517999999999997
- type: map_at_100
value: 28.588
- type: map_at_1000
value: 28.682000000000002
- type: map_at_3
value: 25.345000000000002
- type: map_at_5
value: 26.555
- type: mrr_at_1
value: 21.996
- type: mrr_at_10
value: 29.845
- type: mrr_at_100
value: 30.775999999999996
- type: mrr_at_1000
value: 30.845
- type: mrr_at_3
value: 27.726
- type: mrr_at_5
value: 28.882
- type: ndcg_at_1
value: 21.996
- type: ndcg_at_10
value: 32.034
- type: ndcg_at_100
value: 37.185
- type: ndcg_at_1000
value: 39.645
- type: ndcg_at_3
value: 27.750999999999998
- type: ndcg_at_5
value: 29.805999999999997
- type: precision_at_1
value: 21.996
- type: precision_at_10
value: 5.065
- type: precision_at_100
value: 0.819
- type: precision_at_1000
value: 0.11399999999999999
- type: precision_at_3
value: 12.076
- type: precision_at_5
value: 8.392
- type: recall_at_1
value: 19.945
- type: recall_at_10
value: 43.62
- type: recall_at_100
value: 67.194
- type: recall_at_1000
value: 85.7
- type: recall_at_3
value: 32.15
- type: recall_at_5
value: 37.208999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.279
- type: map_at_10
value: 31.052999999999997
- type: map_at_100
value: 33.125
- type: map_at_1000
value: 33.306000000000004
- type: map_at_3
value: 26.208
- type: map_at_5
value: 28.857
- type: mrr_at_1
value: 42.671
- type: mrr_at_10
value: 54.557
- type: mrr_at_100
value: 55.142
- type: mrr_at_1000
value: 55.169000000000004
- type: mrr_at_3
value: 51.488
- type: mrr_at_5
value: 53.439
- type: ndcg_at_1
value: 42.671
- type: ndcg_at_10
value: 41.276
- type: ndcg_at_100
value: 48.376000000000005
- type: ndcg_at_1000
value: 51.318
- type: ndcg_at_3
value: 35.068
- type: ndcg_at_5
value: 37.242
- type: precision_at_1
value: 42.671
- type: precision_at_10
value: 12.638
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 26.08
- type: precision_at_5
value: 19.805
- type: recall_at_1
value: 18.279
- type: recall_at_10
value: 46.946
- type: recall_at_100
value: 70.97200000000001
- type: recall_at_1000
value: 87.107
- type: recall_at_3
value: 31.147999999999996
- type: recall_at_5
value: 38.099
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.573
- type: map_at_10
value: 19.747
- type: map_at_100
value: 28.205000000000002
- type: map_at_1000
value: 29.831000000000003
- type: map_at_3
value: 14.109
- type: map_at_5
value: 16.448999999999998
- type: mrr_at_1
value: 71
- type: mrr_at_10
value: 77.68599999999999
- type: mrr_at_100
value: 77.995
- type: mrr_at_1000
value: 78.00200000000001
- type: mrr_at_3
value: 76.292
- type: mrr_at_5
value: 77.029
- type: ndcg_at_1
value: 59.12500000000001
- type: ndcg_at_10
value: 43.9
- type: ndcg_at_100
value: 47.863
- type: ndcg_at_1000
value: 54.848
- type: ndcg_at_3
value: 49.803999999999995
- type: ndcg_at_5
value: 46.317
- type: precision_at_1
value: 71
- type: precision_at_10
value: 34.4
- type: precision_at_100
value: 11.063
- type: precision_at_1000
value: 1.989
- type: precision_at_3
value: 52.333
- type: precision_at_5
value: 43.7
- type: recall_at_1
value: 8.573
- type: recall_at_10
value: 25.615
- type: recall_at_100
value: 53.385000000000005
- type: recall_at_1000
value: 75.46000000000001
- type: recall_at_3
value: 15.429
- type: recall_at_5
value: 19.357
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 47.989999999999995
- type: f1
value: 42.776314451497555
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 74.13499999999999
- type: map_at_10
value: 82.825
- type: map_at_100
value: 83.096
- type: map_at_1000
value: 83.111
- type: map_at_3
value: 81.748
- type: map_at_5
value: 82.446
- type: mrr_at_1
value: 79.553
- type: mrr_at_10
value: 86.654
- type: mrr_at_100
value: 86.774
- type: mrr_at_1000
value: 86.778
- type: mrr_at_3
value: 85.981
- type: mrr_at_5
value: 86.462
- type: ndcg_at_1
value: 79.553
- type: ndcg_at_10
value: 86.345
- type: ndcg_at_100
value: 87.32
- type: ndcg_at_1000
value: 87.58200000000001
- type: ndcg_at_3
value: 84.719
- type: ndcg_at_5
value: 85.677
- type: precision_at_1
value: 79.553
- type: precision_at_10
value: 10.402000000000001
- type: precision_at_100
value: 1.1119999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.413
- type: precision_at_5
value: 20.138
- type: recall_at_1
value: 74.13499999999999
- type: recall_at_10
value: 93.215
- type: recall_at_100
value: 97.083
- type: recall_at_1000
value: 98.732
- type: recall_at_3
value: 88.79
- type: recall_at_5
value: 91.259
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 18.298000000000002
- type: map_at_10
value: 29.901
- type: map_at_100
value: 31.528
- type: map_at_1000
value: 31.713
- type: map_at_3
value: 25.740000000000002
- type: map_at_5
value: 28.227999999999998
- type: mrr_at_1
value: 36.728
- type: mrr_at_10
value: 45.401
- type: mrr_at_100
value: 46.27
- type: mrr_at_1000
value: 46.315
- type: mrr_at_3
value: 42.978
- type: mrr_at_5
value: 44.29
- type: ndcg_at_1
value: 36.728
- type: ndcg_at_10
value: 37.456
- type: ndcg_at_100
value: 43.832
- type: ndcg_at_1000
value: 47
- type: ndcg_at_3
value: 33.694
- type: ndcg_at_5
value: 35.085
- type: precision_at_1
value: 36.728
- type: precision_at_10
value: 10.386
- type: precision_at_100
value: 1.701
- type: precision_at_1000
value: 0.22599999999999998
- type: precision_at_3
value: 22.479
- type: precision_at_5
value: 16.605
- type: recall_at_1
value: 18.298000000000002
- type: recall_at_10
value: 44.369
- type: recall_at_100
value: 68.098
- type: recall_at_1000
value: 87.21900000000001
- type: recall_at_3
value: 30.215999999999998
- type: recall_at_5
value: 36.861
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.568
- type: map_at_10
value: 65.061
- type: map_at_100
value: 65.896
- type: map_at_1000
value: 65.95100000000001
- type: map_at_3
value: 61.831
- type: map_at_5
value: 63.849000000000004
- type: mrr_at_1
value: 79.136
- type: mrr_at_10
value: 84.58200000000001
- type: mrr_at_100
value: 84.765
- type: mrr_at_1000
value: 84.772
- type: mrr_at_3
value: 83.684
- type: mrr_at_5
value: 84.223
- type: ndcg_at_1
value: 79.136
- type: ndcg_at_10
value: 72.622
- type: ndcg_at_100
value: 75.539
- type: ndcg_at_1000
value: 76.613
- type: ndcg_at_3
value: 68.065
- type: ndcg_at_5
value: 70.58
- type: precision_at_1
value: 79.136
- type: precision_at_10
value: 15.215
- type: precision_at_100
value: 1.7500000000000002
- type: precision_at_1000
value: 0.189
- type: precision_at_3
value: 44.011
- type: precision_at_5
value: 28.388999999999996
- type: recall_at_1
value: 39.568
- type: recall_at_10
value: 76.077
- type: recall_at_100
value: 87.481
- type: recall_at_1000
value: 94.56400000000001
- type: recall_at_3
value: 66.01599999999999
- type: recall_at_5
value: 70.97200000000001
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 85.312
- type: ap
value: 80.36296867333715
- type: f1
value: 85.26613311552218
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 23.363999999999997
- type: map_at_10
value: 35.711999999999996
- type: map_at_100
value: 36.876999999999995
- type: map_at_1000
value: 36.923
- type: map_at_3
value: 32.034
- type: map_at_5
value: 34.159
- type: mrr_at_1
value: 24.04
- type: mrr_at_10
value: 36.345
- type: mrr_at_100
value: 37.441
- type: mrr_at_1000
value: 37.480000000000004
- type: mrr_at_3
value: 32.713
- type: mrr_at_5
value: 34.824
- type: ndcg_at_1
value: 24.026
- type: ndcg_at_10
value: 42.531
- type: ndcg_at_100
value: 48.081
- type: ndcg_at_1000
value: 49.213
- type: ndcg_at_3
value: 35.044
- type: ndcg_at_5
value: 38.834
- type: precision_at_1
value: 24.026
- type: precision_at_10
value: 6.622999999999999
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.909
- type: precision_at_5
value: 10.871
- type: recall_at_1
value: 23.363999999999997
- type: recall_at_10
value: 63.426
- type: recall_at_100
value: 88.96300000000001
- type: recall_at_1000
value: 97.637
- type: recall_at_3
value: 43.095
- type: recall_at_5
value: 52.178000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.0095759233926
- type: f1
value: 92.78387794667408
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.0296397628819
- type: f1
value: 58.45699589820874
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.45662407531944
- type: f1
value: 71.42364781421813
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07800941492937
- type: f1
value: 77.22799045640845
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 34.531234379250606
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 30.941490381193802
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.3115090856725
- type: mrr
value: 31.290667638675757
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.465
- type: map_at_10
value: 13.03
- type: map_at_100
value: 16.057
- type: map_at_1000
value: 17.49
- type: map_at_3
value: 9.553
- type: map_at_5
value: 11.204
- type: mrr_at_1
value: 43.653
- type: mrr_at_10
value: 53.269
- type: mrr_at_100
value: 53.72
- type: mrr_at_1000
value: 53.761
- type: mrr_at_3
value: 50.929
- type: mrr_at_5
value: 52.461
- type: ndcg_at_1
value: 42.26
- type: ndcg_at_10
value: 34.673
- type: ndcg_at_100
value: 30.759999999999998
- type: ndcg_at_1000
value: 39.728
- type: ndcg_at_3
value: 40.349000000000004
- type: ndcg_at_5
value: 37.915
- type: precision_at_1
value: 43.653
- type: precision_at_10
value: 25.789
- type: precision_at_100
value: 7.754999999999999
- type: precision_at_1000
value: 2.07
- type: precision_at_3
value: 38.596000000000004
- type: precision_at_5
value: 33.251
- type: recall_at_1
value: 5.465
- type: recall_at_10
value: 17.148
- type: recall_at_100
value: 29.768
- type: recall_at_1000
value: 62.239
- type: recall_at_3
value: 10.577
- type: recall_at_5
value: 13.315
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.008
- type: map_at_10
value: 52.467
- type: map_at_100
value: 53.342999999999996
- type: map_at_1000
value: 53.366
- type: map_at_3
value: 48.412
- type: map_at_5
value: 50.875
- type: mrr_at_1
value: 41.541
- type: mrr_at_10
value: 54.967
- type: mrr_at_100
value: 55.611
- type: mrr_at_1000
value: 55.627
- type: mrr_at_3
value: 51.824999999999996
- type: mrr_at_5
value: 53.763000000000005
- type: ndcg_at_1
value: 41.541
- type: ndcg_at_10
value: 59.724999999999994
- type: ndcg_at_100
value: 63.38700000000001
- type: ndcg_at_1000
value: 63.883
- type: ndcg_at_3
value: 52.331
- type: ndcg_at_5
value: 56.327000000000005
- type: precision_at_1
value: 41.541
- type: precision_at_10
value: 9.447
- type: precision_at_100
value: 1.1520000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.262
- type: precision_at_5
value: 16.314999999999998
- type: recall_at_1
value: 37.008
- type: recall_at_10
value: 79.145
- type: recall_at_100
value: 94.986
- type: recall_at_1000
value: 98.607
- type: recall_at_3
value: 60.277
- type: recall_at_5
value: 69.407
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.402
- type: map_at_10
value: 84.181
- type: map_at_100
value: 84.796
- type: map_at_1000
value: 84.81400000000001
- type: map_at_3
value: 81.209
- type: map_at_5
value: 83.085
- type: mrr_at_1
value: 81.02000000000001
- type: mrr_at_10
value: 87.263
- type: mrr_at_100
value: 87.36
- type: mrr_at_1000
value: 87.36
- type: mrr_at_3
value: 86.235
- type: mrr_at_5
value: 86.945
- type: ndcg_at_1
value: 81.01
- type: ndcg_at_10
value: 87.99900000000001
- type: ndcg_at_100
value: 89.217
- type: ndcg_at_1000
value: 89.33
- type: ndcg_at_3
value: 85.053
- type: ndcg_at_5
value: 86.703
- type: precision_at_1
value: 81.01
- type: precision_at_10
value: 13.336
- type: precision_at_100
value: 1.52
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 24.44
- type: recall_at_1
value: 70.402
- type: recall_at_10
value: 95.214
- type: recall_at_100
value: 99.438
- type: recall_at_1000
value: 99.928
- type: recall_at_3
value: 86.75699999999999
- type: recall_at_5
value: 91.44099999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.51721502758904
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 61.054808572333016
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.578
- type: map_at_10
value: 11.036999999999999
- type: map_at_100
value: 12.879999999999999
- type: map_at_1000
value: 13.150999999999998
- type: map_at_3
value: 8.133
- type: map_at_5
value: 9.559
- type: mrr_at_1
value: 22.6
- type: mrr_at_10
value: 32.68
- type: mrr_at_100
value: 33.789
- type: mrr_at_1000
value: 33.854
- type: mrr_at_3
value: 29.7
- type: mrr_at_5
value: 31.480000000000004
- type: ndcg_at_1
value: 22.6
- type: ndcg_at_10
value: 18.616
- type: ndcg_at_100
value: 25.883
- type: ndcg_at_1000
value: 30.944
- type: ndcg_at_3
value: 18.136
- type: ndcg_at_5
value: 15.625
- type: precision_at_1
value: 22.6
- type: precision_at_10
value: 9.48
- type: precision_at_100
value: 1.991
- type: precision_at_1000
value: 0.321
- type: precision_at_3
value: 16.8
- type: precision_at_5
value: 13.54
- type: recall_at_1
value: 4.578
- type: recall_at_10
value: 19.213
- type: recall_at_100
value: 40.397
- type: recall_at_1000
value: 65.2
- type: recall_at_3
value: 10.208
- type: recall_at_5
value: 13.718
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.44288351714071
- type: cos_sim_spearman
value: 79.37995604564952
- type: euclidean_pearson
value: 81.1078874670718
- type: euclidean_spearman
value: 79.37995905980499
- type: manhattan_pearson
value: 81.03697527288986
- type: manhattan_spearman
value: 79.33490235296236
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.95557650436523
- type: cos_sim_spearman
value: 78.5190672399868
- type: euclidean_pearson
value: 81.58064025904707
- type: euclidean_spearman
value: 78.5190672399868
- type: manhattan_pearson
value: 81.52857930619889
- type: manhattan_spearman
value: 78.50421361308034
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 84.79128416228737
- type: cos_sim_spearman
value: 86.05402451477147
- type: euclidean_pearson
value: 85.46280267054289
- type: euclidean_spearman
value: 86.05402451477147
- type: manhattan_pearson
value: 85.46278563858236
- type: manhattan_spearman
value: 86.08079590861004
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 83.20623089568763
- type: cos_sim_spearman
value: 81.53786907061009
- type: euclidean_pearson
value: 82.82272250091494
- type: euclidean_spearman
value: 81.53786907061009
- type: manhattan_pearson
value: 82.78850494027013
- type: manhattan_spearman
value: 81.5135618083407
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 85.46366618397936
- type: cos_sim_spearman
value: 86.96566013336908
- type: euclidean_pearson
value: 86.62651697548931
- type: euclidean_spearman
value: 86.96565526364454
- type: manhattan_pearson
value: 86.58812160258009
- type: manhattan_spearman
value: 86.9336484321288
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 82.51858358641559
- type: cos_sim_spearman
value: 84.7652527954999
- type: euclidean_pearson
value: 84.23914783766861
- type: euclidean_spearman
value: 84.7652527954999
- type: manhattan_pearson
value: 84.22749648503171
- type: manhattan_spearman
value: 84.74527996746386
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.28026563313065
- type: cos_sim_spearman
value: 87.46928143824915
- type: euclidean_pearson
value: 88.30558762000372
- type: euclidean_spearman
value: 87.46928143824915
- type: manhattan_pearson
value: 88.10513330809331
- type: manhattan_spearman
value: 87.21069787834173
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.376497134587375
- type: cos_sim_spearman
value: 65.0159550112516
- type: euclidean_pearson
value: 65.64572120879598
- type: euclidean_spearman
value: 65.0159550112516
- type: manhattan_pearson
value: 65.88143604989976
- type: manhattan_spearman
value: 65.17547297222434
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.22876368947644
- type: cos_sim_spearman
value: 85.46935577445318
- type: euclidean_pearson
value: 85.32830231392005
- type: euclidean_spearman
value: 85.46935577445318
- type: manhattan_pearson
value: 85.30353211758495
- type: manhattan_spearman
value: 85.42821085956945
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 80.60986667767133
- type: mrr
value: 94.29432314236236
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.528
- type: map_at_10
value: 65.187
- type: map_at_100
value: 65.62599999999999
- type: map_at_1000
value: 65.657
- type: map_at_3
value: 62.352
- type: map_at_5
value: 64.025
- type: mrr_at_1
value: 57.333
- type: mrr_at_10
value: 66.577
- type: mrr_at_100
value: 66.88
- type: mrr_at_1000
value: 66.908
- type: mrr_at_3
value: 64.556
- type: mrr_at_5
value: 65.739
- type: ndcg_at_1
value: 57.333
- type: ndcg_at_10
value: 70.275
- type: ndcg_at_100
value: 72.136
- type: ndcg_at_1000
value: 72.963
- type: ndcg_at_3
value: 65.414
- type: ndcg_at_5
value: 67.831
- type: precision_at_1
value: 57.333
- type: precision_at_10
value: 9.5
- type: precision_at_100
value: 1.057
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 25.778000000000002
- type: precision_at_5
value: 17.2
- type: recall_at_1
value: 54.528
- type: recall_at_10
value: 84.356
- type: recall_at_100
value: 92.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.283
- type: recall_at_5
value: 77.14999999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74158415841585
- type: cos_sim_ap
value: 92.90048959850317
- type: cos_sim_f1
value: 86.35650810245687
- type: cos_sim_precision
value: 90.4709748083242
- type: cos_sim_recall
value: 82.6
- type: dot_accuracy
value: 99.74158415841585
- type: dot_ap
value: 92.90048959850317
- type: dot_f1
value: 86.35650810245687
- type: dot_precision
value: 90.4709748083242
- type: dot_recall
value: 82.6
- type: euclidean_accuracy
value: 99.74158415841585
- type: euclidean_ap
value: 92.90048959850317
- type: euclidean_f1
value: 86.35650810245687
- type: euclidean_precision
value: 90.4709748083242
- type: euclidean_recall
value: 82.6
- type: manhattan_accuracy
value: 99.74158415841585
- type: manhattan_ap
value: 92.87344692947894
- type: manhattan_f1
value: 86.38497652582159
- type: manhattan_precision
value: 90.29443838604145
- type: manhattan_recall
value: 82.8
- type: max_accuracy
value: 99.74158415841585
- type: max_ap
value: 92.90048959850317
- type: max_f1
value: 86.38497652582159
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 63.191648770424216
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.02944668730218
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.466386167525265
- type: mrr
value: 51.19071492233257
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.198022505886435
- type: cos_sim_spearman
value: 30.40170257939193
- type: dot_pearson
value: 30.198015316402614
- type: dot_spearman
value: 30.40170257939193
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.242
- type: map_at_10
value: 2.17
- type: map_at_100
value: 12.221
- type: map_at_1000
value: 28.63
- type: map_at_3
value: 0.728
- type: map_at_5
value: 1.185
- type: mrr_at_1
value: 94
- type: mrr_at_10
value: 97
- type: mrr_at_100
value: 97
- type: mrr_at_1000
value: 97
- type: mrr_at_3
value: 97
- type: mrr_at_5
value: 97
- type: ndcg_at_1
value: 89
- type: ndcg_at_10
value: 82.30499999999999
- type: ndcg_at_100
value: 61.839999999999996
- type: ndcg_at_1000
value: 53.381
- type: ndcg_at_3
value: 88.877
- type: ndcg_at_5
value: 86.05199999999999
- type: precision_at_1
value: 94
- type: precision_at_10
value: 87
- type: precision_at_100
value: 63.38
- type: precision_at_1000
value: 23.498
- type: precision_at_3
value: 94
- type: precision_at_5
value: 92
- type: recall_at_1
value: 0.242
- type: recall_at_10
value: 2.302
- type: recall_at_100
value: 14.979000000000001
- type: recall_at_1000
value: 49.638
- type: recall_at_3
value: 0.753
- type: recall_at_5
value: 1.226
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.006
- type: map_at_10
value: 11.805
- type: map_at_100
value: 18.146
- type: map_at_1000
value: 19.788
- type: map_at_3
value: 5.914
- type: map_at_5
value: 8.801
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 56.36600000000001
- type: mrr_at_100
value: 56.721999999999994
- type: mrr_at_1000
value: 56.721999999999994
- type: mrr_at_3
value: 52.041000000000004
- type: mrr_at_5
value: 54.796
- type: ndcg_at_1
value: 37.755
- type: ndcg_at_10
value: 29.863
- type: ndcg_at_100
value: 39.571
- type: ndcg_at_1000
value: 51.385999999999996
- type: ndcg_at_3
value: 32.578
- type: ndcg_at_5
value: 32.351
- type: precision_at_1
value: 40.816
- type: precision_at_10
value: 26.531
- type: precision_at_100
value: 7.796
- type: precision_at_1000
value: 1.555
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 33.061
- type: recall_at_1
value: 3.006
- type: recall_at_10
value: 18.738
- type: recall_at_100
value: 48.058
- type: recall_at_1000
value: 83.41300000000001
- type: recall_at_3
value: 7.166
- type: recall_at_5
value: 12.102
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.4178
- type: ap
value: 14.648781342150446
- type: f1
value: 55.07299194946378
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 60.919637804187886
- type: f1
value: 61.24122013967399
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.207896583685695
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.23114978840078
- type: cos_sim_ap
value: 74.26624727825818
- type: cos_sim_f1
value: 68.72377190817083
- type: cos_sim_precision
value: 64.56400742115028
- type: cos_sim_recall
value: 73.45646437994723
- type: dot_accuracy
value: 86.23114978840078
- type: dot_ap
value: 74.26624032659652
- type: dot_f1
value: 68.72377190817083
- type: dot_precision
value: 64.56400742115028
- type: dot_recall
value: 73.45646437994723
- type: euclidean_accuracy
value: 86.23114978840078
- type: euclidean_ap
value: 74.26624714480556
- type: euclidean_f1
value: 68.72377190817083
- type: euclidean_precision
value: 64.56400742115028
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.16558383501221
- type: manhattan_ap
value: 74.2091943976357
- type: manhattan_f1
value: 68.64221520524654
- type: manhattan_precision
value: 63.59135913591359
- type: manhattan_recall
value: 74.5646437994723
- type: max_accuracy
value: 86.23114978840078
- type: max_ap
value: 74.26624727825818
- type: max_f1
value: 68.72377190817083
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.3681841114604
- type: cos_sim_ap
value: 86.65166387498546
- type: cos_sim_f1
value: 79.02581944698774
- type: cos_sim_precision
value: 75.35796605434099
- type: cos_sim_recall
value: 83.06898675700647
- type: dot_accuracy
value: 89.3681841114604
- type: dot_ap
value: 86.65166019802056
- type: dot_f1
value: 79.02581944698774
- type: dot_precision
value: 75.35796605434099
- type: dot_recall
value: 83.06898675700647
- type: euclidean_accuracy
value: 89.3681841114604
- type: euclidean_ap
value: 86.65166462876266
- type: euclidean_f1
value: 79.02581944698774
- type: euclidean_precision
value: 75.35796605434099
- type: euclidean_recall
value: 83.06898675700647
- type: manhattan_accuracy
value: 89.36624364497226
- type: manhattan_ap
value: 86.65076471274106
- type: manhattan_f1
value: 79.07408783532733
- type: manhattan_precision
value: 76.41102972856527
- type: manhattan_recall
value: 81.92947336002464
- type: max_accuracy
value: 89.3681841114604
- type: max_ap
value: 86.65166462876266
- type: max_f1
value: 79.07408783532733
---
# BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF
This model was converted to GGUF format from [`nomic-ai/nomic-embed-text-v1.5`](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -c 2048
```
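Since this is an embedding model, the text-completion prompts above mainly serve as smoke tests. To produce actual embeddings, here is a minimal sketch using the `llama-embedding` example binary built from the same tree (the `search_query:` task prefix follows the nomic-embed usage convention and is an assumption here, as is `--hf-repo` support in this binary):
```bash
./llama-embedding --hf-repo BenevolenceMessiah/nomic-embed-text-v1.5-Q8_0-GGUF --hf-file nomic-embed-text-v1.5-q8_0.gguf -p "search_query: What is deep learning?"
```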
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
HPAI-BSC/Qwen2.5-Aloe-Beta-72B | HPAI-BSC | question-answering | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"biology",
"medical",
"healthcare",
"question-answering",
"en",
"dataset:HPAI-BSC/Aloe-Beta-General-Collection",
"dataset:HPAI-BSC/chain-of-diagnosis",
"dataset:HPAI-BSC/MedS-Ins",
"dataset:HPAI-BSC/ultramedical",
"dataset:HPAI-BSC/pubmedqa-cot-llama31",
"dataset:HPAI-BSC/medqa-cot-llama31",
"dataset:HPAI-BSC/medmcqa-cot-llama31",
"dataset:HPAI-BSC/headqa-cot-llama31",
"dataset:HPAI-BSC/MMLU-medical-cot-llama31",
"dataset:HPAI-BSC/Polymed-QA",
"arxiv:2405.01886",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-09T15:25:06 | 2025-01-22T14:21:44 | 43 | 9 | ---
datasets:
- HPAI-BSC/Aloe-Beta-General-Collection
- HPAI-BSC/chain-of-diagnosis
- HPAI-BSC/MedS-Ins
- HPAI-BSC/ultramedical
- HPAI-BSC/pubmedqa-cot-llama31
- HPAI-BSC/medqa-cot-llama31
- HPAI-BSC/medmcqa-cot-llama31
- HPAI-BSC/headqa-cot-llama31
- HPAI-BSC/MMLU-medical-cot-llama31
- HPAI-BSC/Polymed-QA
language:
- en
library_name: transformers
pipeline_tag: question-answering
tags:
- biology
- medical
- healthcare
---
<p align="center">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/3_lyx8rP6VuhXN8YRaZDS.png">
<img alt="aloe_beta_7b" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/3_lyx8rP6VuhXN8YRaZDS.png" width=50%>
</picture>
</p>
<h1 align="center">
Aloe: A Family of Fine-tuned Open Healthcare LLMs
</h1>
---
Qwen2.5-Aloe-Beta-72B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in four model sizes: [7B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-7B/), [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B), [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B), and [72B](https://huggingface.co/HPAI-BSC/Qwen2.5-Aloe-Beta-72B). All models are trained using the same recipe, on top of two different families of models: Llama3.1 and Qwen2.5.
Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 7B and 8B versions get close to the performance of closed models like MedPalm-2 and GPT-4. With the same RAG system, Llama3.1-Aloe-Beta-70B and Qwen2.5-Aloe-Beta-72B outperform those private alternatives, producing state-of-the-art results.
# Aloe-Beta-72B

**Aloe-Beta** is the latest iteration in the **Aloe family**, building and improving on the success of its predecessor, [Aloe-8B-Alpha](https://huggingface.co/HPAI-BSC/Llama3-Aloe-8B-Alpha).
Beta more than triples the training data used by Alpha, for a total of **1.8B tokens**, including a wider variety of medical tasks and instructions (e.g., text summarization, explanation, diagnosis, text classification, treatment recommendation, ...).

To mitigate catastrophic forgetting and enable the model to effectively learn new capabilities like **function calling**, we incorporated a diverse set of high-quality general-purpose data constituting 20% of the total training set. The curated data includes some of the highest-quality content available across a range of topics, including mathematics, programming, STEM, and very long instructions (> 8k tokens), to enrich the model's adaptability and comprehension across diverse domains.
Beta also boosts the alignment and safety stages with respect to Alpha. This includes a [medical preference dataset](https://huggingface.co/datasets/TsinghuaC3I/UltraMedical-Preference), as well as the red-teaming dataset (available soon).
Complete training details, model merging configurations, and all training data (including synthetically generated data) can be found below. This includes [the RAG system](https://github.com/HPAI-BSC/prompt_engine) that was developed to test Aloe Beta in a deployment setup. Aloe comes with a healthcare-specific risk assessment to facilitate the safe use and deployment of such systems.
## Model Details
### Model Description
- **Developed by:** [HPAI](https://hpai.bsc.es/)
- **Model type:** Causal decoder-only transformer language model
- **Language(s) (NLP):** English (capable but not formally evaluated on other languages)
- **License:** This model is based on [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B) which is released with Apache 2.0 license. All our modifications are available with a [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license, making the Aloe Beta models **compatible with commercial use**.
- **Base model :** [Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B)
- **Paper:** (more coming soon)
- **RAG Repository:** https://github.com/HPAI-BSC/prompt_engine
## Model Performance
Aloe Beta has been tested on the most popular healthcare QA datasets, with and without the Medprompt inference technique. Results show competitive performance, achieving SOTA within models of the same size.

The Beta model has been developed to excel in several different medical tasks, so we evaluated it across a broad set of them:


We also compared the model's performance in the general domain using the OpenLLM Leaderboard benchmark. Aloe-Beta is competitive with current SOTA general models on the most widely used general benchmarks and outperforms the medical models:

## Uses
### Direct Use
We encourage the use of Aloe for research purposes, as a stepping stone to build better foundational models for healthcare. In production, Aloe should always be used under the supervision of a human expert.
### Out-of-Scope Use
These models are not to be used for clinical practice, medical diagnosis, or any other form of direct or indirect healthcare advice. Models are prone to error and can produce toxic content. The use of Aloe models for activities harmful to individuals, such as spam, fraud, or impersonation, is strictly prohibited. Minors should not be left alone to interact with Aloe without supervision.
## Bias, Risks, and Limitations
Aloe can produce toxic content under the appropriate prompts, and it includes multiple undesirable biases. While significant efforts were conducted to mitigate this (see Alignment details below), model safety cannot be fully guaranteed. We avoid the use of all personal data in our training.
We identify at least three risk cases specific to healthcare LLMs:
- Healthcare professional impersonation, a fraudulent behaviour which currently generates billions of dollars in [profit](https://www.justice.gov/opa/pr/justice-department-charges-dozens-12-billion-health-care-fraud). A model such as Aloe could be used to increase the efficacy of such deceiving activities, making them more widespread. The main preventive actions are public literacy on the unreliability of digitised information and the importance of medical registration, and legislation enforcing AI-generated content disclaimers.
- Medical decision-making without professional supervision. While this is already an issue in modern societies (e.g., self-medication), a model such as Aloe, capable of producing high-quality conversational data, can facilitate self-delusion, particularly in the presence of sycophancy. By producing tailored responses, it can also be used to generate actionable answers. Public literacy on the dangers of self-diagnosis is one of the main defenses, together with the introduction of disclaimers and warnings on the models' outputs.
- Access to information on dangerous substances or procedures. While the literature on sensitive content can already be found in different sources (e.g., libraries, the internet, the dark web), LLMs can centralize such access, making it nearly impossible to control the flow of such information. Model alignment can help in that regard, but so far the effects remain insufficient, as jailbreaking methods still overcome it.
<!---
Table below shows the performance of Aloe at several AI safety tasks:
TO BE UPDATED
<img src="https://cdn-uploads.huggingface.co/production/uploads/62972c4979f193515da1d38e/T6Jblpf1kmTkM04K716rM.png" width="95%">
We analyzed the safety and robustness of the model using red teaming techniques. We designed a benchmark using different types of attacks and analyzed the performance of Aloe and some extra models, and we confirm that our model is aligned properly and successfully resisting most attacks:


-->
## How to Get Started with the Model
Use the code below to get started with the model. You can run conversational inference using the Transformers pipeline abstraction, or by leveraging the Auto classes with the `generate()` function. Let's see examples for both.
#### Transformers pipeline
```python
import transformers
import torch

model_id = "HPAI-BSC/Qwen2.5-Aloe-Beta-72B"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello."},
]

# Build the ChatML-style prompt string from the message list.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop generation at either the EOS token or the ChatML end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|im_end|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    repetition_penalty=1.05,
)
print(outputs[0]["generated_text"][len(prompt):])
```
#### Transformers AutoModelForCausalLM
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "HPAI-BSC/Qwen2.5-Aloe-Beta-72B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert medical assistant named Aloe, developed by the High Performance Artificial Intelligence Group at Barcelona Supercomputing Center (BSC). You are to be a helpful, respectful, and honest assistant."},
    {"role": "user", "content": "Hello"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Stop generation at either the EOS token or the ChatML end-of-turn token.
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|im_end|>"),
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    repetition_penalty=1.05,
)
# Decode only the newly generated tokens, skipping the prompt.
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## Training Details
### Supervised fine-tuning
SFT on top of Qwen2.5-72B using axolotl (https://github.com/axolotl-ai-cloud/axolotl).
We used DeepSpeed's ZeRO-3 distributed training on the following hardware:
* 7B: 32x NVIDIA Hopper H100 64GB of the *Marenostrum 5*.
* 8B: 32x NVIDIA Hopper H100 64GB of the *Marenostrum 5*.
* 70B: 64x NVIDIA Hopper H100 64GB of the *Marenostrum 5*.
* 72B: 92x NVIDIA Hopper H100 64GB of the *Marenostrum 5*.
<!---
^^^ TO BE COMPLETED AND DETAILED ^^^
-->
#### Training Data
The training set consists of around 1.8B tokens and comprises three different types of data:
- Medical domain datasets. Includes data from 20 different medical tasks.
- [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
- [HPAI-BSC/chain-of-diagnosis](https://huggingface.co/datasets/HPAI-BSC/chain-of-diagnosis)
- [HPAI-BSC/MedS-Ins](https://huggingface.co/datasets/HPAI-BSC/MedS-Ins)
  - [HPAI-BSC/ultramedical](https://huggingface.co/datasets/HPAI-BSC/ultramedical)
- Synthetic data. We expanded our training data by generating high-quality answers using Llama3.1-70B.
- [HPAI-BSC/pubmedqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/pubmedqa-cot-llama31)
- [HPAI-BSC/medqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medqa-cot-llama31)
- [HPAI-BSC/medmcqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/medmcqa-cot-llama31)
- [HPAI-BSC/headqa-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/headqa-cot-llama31)
- [HPAI-BSC/MMLU-medical-cot-llama31](https://huggingface.co/datasets/HPAI-BSC/MMLU-medical-cot-llama31)
- [HPAI-BSC/Polymed-QA](https://huggingface.co/datasets/HPAI-BSC/Polymed-QA)
- Genstruct data (coming soon)
- General data. It includes maths, STEM, code, function calling, and instructions with a very long context.
- [HPAI-BSC/Aloe-Beta-General-Collection](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-General-Collection)
#### Training parameters
- Epochs: 3
- Sequence length: 16384
- Optimizer: adamw_torch
- Learning rate: 1e-5
- Learning rate scheduler: cosine
- Warmup steps: 100
- Weight decay: 0
- Gradient checkpointing
- Zero 3
- Total batch size: 128
- Batch size per device: 1
- Gradient accumulation steps: 4
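These settings map onto an axolotl configuration roughly as follows (an illustrative sketch based on the parameters above, not the exact config used):
```yaml
base_model: Qwen/Qwen2.5-72B
sequence_len: 16384
num_epochs: 3
optimizer: adamw_torch
learning_rate: 1e-5
lr_scheduler: cosine
warmup_steps: 100
weight_decay: 0
gradient_checkpointing: true
deepspeed: deepspeed_configs/zero3.json  # ZeRO-3
micro_batch_size: 1                      # batch size per device
gradient_accumulation_steps: 4
```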
### Model Merging
The trained model was merged with the Qwen2.5-72B-Instruct model using the DARE-TIES technique. [Mergekit](https://github.com/arcee-ai/mergekit) was used to conduct the merging.
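For reference, a hypothetical Mergekit configuration for such a merge (the SFT checkpoint path and the density/weight values are illustrative, not the settings actually used):
```yaml
merge_method: dare_ties
base_model: Qwen/Qwen2.5-72B
models:
  - model: path/to/aloe-beta-72b-sft   # hypothetical SFT checkpoint
    parameters:
      density: 0.5
      weight: 0.5
  - model: Qwen/Qwen2.5-72B-Instruct
    parameters:
      density: 0.5
      weight: 0.5
dtype: bfloat16
```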
### Model Alignment
The model is aligned using the Direct Preference Optimization (DPO) technique through a two-step process:
1. General DPO Alignment: This step uses a dataset combining medical, general preference, and safety data. We used our dataset [HPAI-BSC/Aloe-Beta-DPO](https://huggingface.co/datasets/HPAI-BSC/Aloe-Beta-DPO). We split the dataset into five parts, and the model was trained iteratively for one epoch on each chunk. We used a learning rate of 2e-7.
2. Red-Teaming Alignment: This step further fine-tunes the model to resist a variety of potential attacks, enhancing its robustness and security. Dataset will be shared soon. In this stage, we set the learning rate to 1e-7.
<!---
^^^ LINKS TO DPO DATA (DPO added, missing the RT^^^
-->
We used the [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF) library. We aligned the model using 16x NVIDIA Hopper H100 64GB of the *Marenostrum 5*. Common hyperparameters (a sketch of the chunked DPO loop follows the list):
- Sequence length: 4096
- Optimizer: Fused adam
- Total batch size: 128
- Batch size per device: 1
- Gradient accumulation steps: 8
- Beta: 0.1
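For illustration, a minimal sketch of the chunked first-stage DPO pass using the Hugging Face TRL library (the actual run used OpenRLHF; the split name, column schema, checkpoint path, and TRL API details are assumptions here):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "path/to/aloe-sft-merged-checkpoint"  # hypothetical local checkpoint
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumes a "train" split with prompt/chosen/rejected columns.
dataset = load_dataset("HPAI-BSC/Aloe-Beta-DPO", split="train")

# Train iteratively for one epoch on each of five chunks.
for i in range(5):
    chunk = dataset.shard(num_shards=5, index=i)
    args = DPOConfig(
        output_dir=f"aloe-dpo-chunk-{i}",
        num_train_epochs=1,
        learning_rate=2e-7,
        beta=0.1,
        max_length=4096,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
    )
    trainer = DPOTrainer(
        model=model,
        args=args,
        train_dataset=chunk,
        processing_class=tokenizer,  # older TRL versions use tokenizer=
    )
    trainer.train()
    model = trainer.model  # carry the updated weights into the next chunk
```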
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- [ACI-BENCH](https://github.com/wyim/aci-bench)
- [MTS-Dialog](https://github.com/abachaa/MTS-Dialog)
- [MedText](https://huggingface.co/datasets/BI55/MedText)
- [Medical Text classification](https://www.kaggle.com/datasets/chaitanyakck/medical-text/data)
- [OLAPH](https://github.com/dmis-lab/OLAPH)
- CareQA Open
- [MedDialog](https://huggingface.co/datasets/bigbio/meddialog)
- [MEDIQA QA](https://huggingface.co/datasets/bigbio/mediqa_qa)
- [Meddialog Qsumm](https://huggingface.co/datasets/lighteval/med_dialog)
- [Biored](https://huggingface.co/datasets/YufeiHFUT/BioRED_all_info)
- [MIMIC-III](https://huggingface.co/datasets/dmacres/mimiciii-hospitalcourse-meta)
- [Medical Prescription](https://huggingface.co/datasets/devlocalhost/prescription-full)
- [MedQA (USMLE)](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA](https://huggingface.co/datasets/medmcqa)
- [PubMedQA](https://huggingface.co/datasets/bigbio/pubmed_qa)
- [MMLU-Medical](https://huggingface.co/datasets/lukaemon/mmlu)
- [MedQA-4-Option](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
- [CareQA](https://huggingface.co/datasets/HPAI-BSC/CareQA)
- [Open LLM Leaderboard 2](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
<!---
^^^ CAREQA Open link MISSING ^^^
-->
#### Metrics
- Accuracy: suited to the evaluation of multiple-choice question-answering tasks.
- ROUGE-1: measures the overlap of unigrams between the system output and the gold standard.
<!---
^^^ MORE METRICS MISSING ^^^
-->
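As a concrete illustration of the two metrics, a minimal sketch using the Hugging Face `evaluate` library (an assumption; the exact evaluation harness may differ):
```python
import evaluate

accuracy = evaluate.load("accuracy")
rouge = evaluate.load("rouge")

# MCQA tasks: compare predicted option indices against gold labels.
print(accuracy.compute(predictions=[0, 2, 1, 3], references=[0, 2, 2, 3]))

# Open-ended tasks: ROUGE-1 unigram overlap between output and reference.
print(rouge.compute(
    predictions=["aspirin can reduce fever and mild pain"],
    references=["aspirin is commonly used to reduce fever and relieve mild pain"],
))
```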
#### Summary
To compare Aloe with the most competitive open models (both general purpose and healthcare-specific) we use popular healthcare datasets (PubMedQA, MedMCQA, MedQA and MMLU for six medical tasks only), together with the new and highly reliable CareQA. However, while MCQA benchmarks provide valuable insights into a model's ability to handle structured queries, they fall short of representing the full range of challenges faced in medical practice. Building upon this idea, Aloe-Beta represents the next step in the evolution of the Aloe Family, designed to broaden the scope beyond the multiple-choice question-answering tasks that define Aloe-Alpha.
Benchmark results indicate the training conducted on Aloe has boosted its performance, achieving results comparable to SOTA models like Llama3-OpenBioLLM, Llama3-Med42, MedPalm-2 and GPT-4. Llama3.1-Aloe-Beta-70B also outperforms the other existing medical models on the OpenLLM Leaderboard and in the evaluation of other medical tasks such as medical factuality and medical treatment recommendation, among others. All these results make Llama3.1-Aloe-Beta-70B one of the best existing models for healthcare.
Benchmark results indicate the training conducted on Qwen2.5-Aloe-Beta-72B has boosted its performance, outperforming all existing public and private models on the medical MCQA benchmarks. In addition, the model leads in the evaluation of other medical tasks such as medical factuality and medical treatment recommendation, among others.
With the help of prompting techniques, the performance of Aloe improves significantly. Medprompting in particular provides a 4% increase in reported accuracy, after which Qwen2.5-Aloe-Beta-72B outperforms all existing models that do not use RAG evaluation.
## Environmental Impact
- **Hardware Type:** 32xH100
- **Hours used (8B):** 544 GPU hours
- **Hours used (70B):** 4500 GPU hours
- **Hardware Provider:** Barcelona Supercomputing Center (BSC)
- **Compute Region:** Spain
- **Carbon Emitted:** 34.1 kg of CO2
<!---
^^^ ARE CARBON EMISSIONS FOR BOTH? ^^^
-->
## Authors
Aloe Beta has been developed by the [High Performance Artificial Intelligence](https://hpai.bsc.es/) research group, from the [Barcelona Supercomputing Center - BSC](https://www.bsc.es/). Main authors are [Jordi Bayarri Planas](https://huggingface.co/JordiBayarri), [Ashwin Kumar Gururajan](https://huggingface.co/G-AshwinKumar) and [Dario Garcia-Gasulla](https://huggingface.co/dariog). Red teaming efforts led by Adrian Tormos.
mailto:[email protected]
## Citations
<!---
Add the prompt engine paper below
-->
If you use this repository in a published work, please cite the corresponding papers as source:
```
@misc{gururajan2024aloe,
title={Aloe: A Family of Fine-tuned Open Healthcare LLMs},
author={Ashwin Kumar Gururajan and Enrique Lopez-Cuena and Jordi Bayarri-Planas and Adrian Tormos and Daniel Hinjos and Pablo Bernabeu-Perez and Anna Arias-Duart and Pablo Agustin Martin-Torres and Lucia Urcelay-Ganzabal and Marta Gonzalez-Mallo and Sergio Alvarez-Napagao and Eduard Ayguadé-Parra and Ulises Cortés and Dario Garcia-Gasulla},
year={2024},
eprint={2405.01886},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | [
"BIORED",
"MEDIQA QA",
"MEDDIALOG",
"MEDQA",
"PUBMEDQA"
] |
ggml-org/jina-embeddings-v2-base-en-Q8_0-GGUF | ggml-org | feature-extraction | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:allenai/c4",
"base_model:jinaai/jina-embeddings-v2-base-en",
"base_model:quantized:jinaai/jina-embeddings-v2-base-en",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | 2024-12-12T17:29:44 | 2024-12-12T17:29:47 | 43 | 1 | ---
base_model: jinaai/jina-embeddings-v2-base-en
datasets:
- allenai/c4
language: en
license: apache-2.0
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- llama-cpp
- gguf-my-repo
inference: false
model-index:
- name: jina-embedding-b-en-v2
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.73134328358209
- type: ap
value: 37.765427081831035
- type: f1
value: 68.79367444339518
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 88.544275
- type: ap
value: 84.61328675662887
- type: f1
value: 88.51879035862375
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.263999999999996
- type: f1
value: 43.778759656699435
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.693
- type: map_at_10
value: 35.487
- type: map_at_100
value: 36.862
- type: map_at_1000
value: 36.872
- type: map_at_3
value: 30.049999999999997
- type: map_at_5
value: 32.966
- type: mrr_at_1
value: 21.977
- type: mrr_at_10
value: 35.565999999999995
- type: mrr_at_100
value: 36.948
- type: mrr_at_1000
value: 36.958
- type: mrr_at_3
value: 30.121
- type: mrr_at_5
value: 33.051
- type: ndcg_at_1
value: 21.693
- type: ndcg_at_10
value: 44.181
- type: ndcg_at_100
value: 49.982
- type: ndcg_at_1000
value: 50.233000000000004
- type: ndcg_at_3
value: 32.830999999999996
- type: ndcg_at_5
value: 38.080000000000005
- type: precision_at_1
value: 21.693
- type: precision_at_10
value: 7.248
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 13.632
- type: precision_at_5
value: 10.725
- type: recall_at_1
value: 21.693
- type: recall_at_10
value: 72.475
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 40.896
- type: recall_at_5
value: 53.627
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 45.39242428696777
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 36.675626784714
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.247725694904034
- type: mrr
value: 74.91359978894604
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 82.68003802970496
- type: cos_sim_spearman
value: 81.23438110096286
- type: euclidean_pearson
value: 81.87462986142582
- type: euclidean_spearman
value: 81.23438110096286
- type: manhattan_pearson
value: 81.61162566600755
- type: manhattan_spearman
value: 81.11329400456184
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.01298701298701
- type: f1
value: 83.31690714969382
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.050108150972086
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 30.15731442819715
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.391999999999996
- type: map_at_10
value: 42.597
- type: map_at_100
value: 44.07
- type: map_at_1000
value: 44.198
- type: map_at_3
value: 38.957
- type: map_at_5
value: 40.961
- type: mrr_at_1
value: 37.196
- type: mrr_at_10
value: 48.152
- type: mrr_at_100
value: 48.928
- type: mrr_at_1000
value: 48.964999999999996
- type: mrr_at_3
value: 45.446
- type: mrr_at_5
value: 47.205999999999996
- type: ndcg_at_1
value: 37.196
- type: ndcg_at_10
value: 49.089
- type: ndcg_at_100
value: 54.471000000000004
- type: ndcg_at_1000
value: 56.385
- type: ndcg_at_3
value: 43.699
- type: ndcg_at_5
value: 46.22
- type: precision_at_1
value: 37.196
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 20.839
- type: precision_at_5
value: 14.936
- type: recall_at_1
value: 31.391999999999996
- type: recall_at_10
value: 61.876
- type: recall_at_100
value: 84.214
- type: recall_at_1000
value: 95.985
- type: recall_at_3
value: 46.6
- type: recall_at_5
value: 53.588
- type: map_at_1
value: 29.083
- type: map_at_10
value: 38.812999999999995
- type: map_at_100
value: 40.053
- type: map_at_1000
value: 40.188
- type: map_at_3
value: 36.111
- type: map_at_5
value: 37.519000000000005
- type: mrr_at_1
value: 36.497
- type: mrr_at_10
value: 44.85
- type: mrr_at_100
value: 45.546
- type: mrr_at_1000
value: 45.593
- type: mrr_at_3
value: 42.686
- type: mrr_at_5
value: 43.909
- type: ndcg_at_1
value: 36.497
- type: ndcg_at_10
value: 44.443
- type: ndcg_at_100
value: 48.979
- type: ndcg_at_1000
value: 51.154999999999994
- type: ndcg_at_3
value: 40.660000000000004
- type: ndcg_at_5
value: 42.193000000000005
- type: precision_at_1
value: 36.497
- type: precision_at_10
value: 8.433
- type: precision_at_100
value: 1.369
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 19.894000000000002
- type: precision_at_5
value: 13.873
- type: recall_at_1
value: 29.083
- type: recall_at_10
value: 54.313
- type: recall_at_100
value: 73.792
- type: recall_at_1000
value: 87.629
- type: recall_at_3
value: 42.257
- type: recall_at_5
value: 47.066
- type: map_at_1
value: 38.556000000000004
- type: map_at_10
value: 50.698
- type: map_at_100
value: 51.705
- type: map_at_1000
value: 51.768
- type: map_at_3
value: 47.848
- type: map_at_5
value: 49.358000000000004
- type: mrr_at_1
value: 43.95
- type: mrr_at_10
value: 54.191
- type: mrr_at_100
value: 54.852999999999994
- type: mrr_at_1000
value: 54.885
- type: mrr_at_3
value: 51.954
- type: mrr_at_5
value: 53.13
- type: ndcg_at_1
value: 43.95
- type: ndcg_at_10
value: 56.516
- type: ndcg_at_100
value: 60.477000000000004
- type: ndcg_at_1000
value: 61.746
- type: ndcg_at_3
value: 51.601
- type: ndcg_at_5
value: 53.795
- type: precision_at_1
value: 43.95
- type: precision_at_10
value: 9.009
- type: precision_at_100
value: 1.189
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 22.989
- type: precision_at_5
value: 15.473
- type: recall_at_1
value: 38.556000000000004
- type: recall_at_10
value: 70.159
- type: recall_at_100
value: 87.132
- type: recall_at_1000
value: 96.16
- type: recall_at_3
value: 56.906
- type: recall_at_5
value: 62.332
- type: map_at_1
value: 24.238
- type: map_at_10
value: 32.5
- type: map_at_100
value: 33.637
- type: map_at_1000
value: 33.719
- type: map_at_3
value: 30.026999999999997
- type: map_at_5
value: 31.555
- type: mrr_at_1
value: 26.328000000000003
- type: mrr_at_10
value: 34.44
- type: mrr_at_100
value: 35.455999999999996
- type: mrr_at_1000
value: 35.521
- type: mrr_at_3
value: 32.034
- type: mrr_at_5
value: 33.565
- type: ndcg_at_1
value: 26.328000000000003
- type: ndcg_at_10
value: 37.202
- type: ndcg_at_100
value: 42.728
- type: ndcg_at_1000
value: 44.792
- type: ndcg_at_3
value: 32.368
- type: ndcg_at_5
value: 35.008
- type: precision_at_1
value: 26.328000000000003
- type: precision_at_10
value: 5.7059999999999995
- type: precision_at_100
value: 0.8880000000000001
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 13.672
- type: precision_at_5
value: 9.74
- type: recall_at_1
value: 24.238
- type: recall_at_10
value: 49.829
- type: recall_at_100
value: 75.21
- type: recall_at_1000
value: 90.521
- type: recall_at_3
value: 36.867
- type: recall_at_5
value: 43.241
- type: map_at_1
value: 15.378
- type: map_at_10
value: 22.817999999999998
- type: map_at_100
value: 23.977999999999998
- type: map_at_1000
value: 24.108
- type: map_at_3
value: 20.719
- type: map_at_5
value: 21.889
- type: mrr_at_1
value: 19.03
- type: mrr_at_10
value: 27.022000000000002
- type: mrr_at_100
value: 28.011999999999997
- type: mrr_at_1000
value: 28.096
- type: mrr_at_3
value: 24.855
- type: mrr_at_5
value: 26.029999999999998
- type: ndcg_at_1
value: 19.03
- type: ndcg_at_10
value: 27.526
- type: ndcg_at_100
value: 33.040000000000006
- type: ndcg_at_1000
value: 36.187000000000005
- type: ndcg_at_3
value: 23.497
- type: ndcg_at_5
value: 25.334
- type: precision_at_1
value: 19.03
- type: precision_at_10
value: 4.963
- type: precision_at_100
value: 0.893
- type: precision_at_1000
value: 0.13
- type: precision_at_3
value: 11.360000000000001
- type: precision_at_5
value: 8.134
- type: recall_at_1
value: 15.378
- type: recall_at_10
value: 38.061
- type: recall_at_100
value: 61.754
- type: recall_at_1000
value: 84.259
- type: recall_at_3
value: 26.788
- type: recall_at_5
value: 31.326999999999998
- type: map_at_1
value: 27.511999999999997
- type: map_at_10
value: 37.429
- type: map_at_100
value: 38.818000000000005
- type: map_at_1000
value: 38.924
- type: map_at_3
value: 34.625
- type: map_at_5
value: 36.064
- type: mrr_at_1
value: 33.300999999999995
- type: mrr_at_10
value: 43.036
- type: mrr_at_100
value: 43.894
- type: mrr_at_1000
value: 43.936
- type: mrr_at_3
value: 40.825
- type: mrr_at_5
value: 42.028
- type: ndcg_at_1
value: 33.300999999999995
- type: ndcg_at_10
value: 43.229
- type: ndcg_at_100
value: 48.992000000000004
- type: ndcg_at_1000
value: 51.02100000000001
- type: ndcg_at_3
value: 38.794000000000004
- type: ndcg_at_5
value: 40.65
- type: precision_at_1
value: 33.300999999999995
- type: precision_at_10
value: 7.777000000000001
- type: precision_at_100
value: 1.269
- type: precision_at_1000
value: 0.163
- type: precision_at_3
value: 18.351
- type: precision_at_5
value: 12.762
- type: recall_at_1
value: 27.511999999999997
- type: recall_at_10
value: 54.788000000000004
- type: recall_at_100
value: 79.105
- type: recall_at_1000
value: 92.49199999999999
- type: recall_at_3
value: 41.924
- type: recall_at_5
value: 47.026
- type: map_at_1
value: 24.117
- type: map_at_10
value: 33.32
- type: map_at_100
value: 34.677
- type: map_at_1000
value: 34.78
- type: map_at_3
value: 30.233999999999998
- type: map_at_5
value: 31.668000000000003
- type: mrr_at_1
value: 29.566
- type: mrr_at_10
value: 38.244
- type: mrr_at_100
value: 39.245000000000005
- type: mrr_at_1000
value: 39.296
- type: mrr_at_3
value: 35.864000000000004
- type: mrr_at_5
value: 36.919999999999995
- type: ndcg_at_1
value: 29.566
- type: ndcg_at_10
value: 39.127
- type: ndcg_at_100
value: 44.989000000000004
- type: ndcg_at_1000
value: 47.189
- type: ndcg_at_3
value: 34.039
- type: ndcg_at_5
value: 35.744
- type: precision_at_1
value: 29.566
- type: precision_at_10
value: 7.385999999999999
- type: precision_at_100
value: 1.204
- type: precision_at_1000
value: 0.158
- type: precision_at_3
value: 16.286
- type: precision_at_5
value: 11.484
- type: recall_at_1
value: 24.117
- type: recall_at_10
value: 51.559999999999995
- type: recall_at_100
value: 77.104
- type: recall_at_1000
value: 91.79899999999999
- type: recall_at_3
value: 36.82
- type: recall_at_5
value: 41.453
- type: map_at_1
value: 25.17625
- type: map_at_10
value: 34.063916666666664
- type: map_at_100
value: 35.255500000000005
- type: map_at_1000
value: 35.37275
- type: map_at_3
value: 31.351666666666667
- type: map_at_5
value: 32.80608333333333
- type: mrr_at_1
value: 29.59783333333333
- type: mrr_at_10
value: 38.0925
- type: mrr_at_100
value: 38.957249999999995
- type: mrr_at_1000
value: 39.01608333333333
- type: mrr_at_3
value: 35.77625
- type: mrr_at_5
value: 37.04991666666667
- type: ndcg_at_1
value: 29.59783333333333
- type: ndcg_at_10
value: 39.343666666666664
- type: ndcg_at_100
value: 44.488249999999994
- type: ndcg_at_1000
value: 46.83358333333334
- type: ndcg_at_3
value: 34.69708333333333
- type: ndcg_at_5
value: 36.75075
- type: precision_at_1
value: 29.59783333333333
- type: precision_at_10
value: 6.884083333333332
- type: precision_at_100
value: 1.114
- type: precision_at_1000
value: 0.15108333333333332
- type: precision_at_3
value: 15.965250000000003
- type: precision_at_5
value: 11.246500000000001
- type: recall_at_1
value: 25.17625
- type: recall_at_10
value: 51.015999999999984
- type: recall_at_100
value: 73.60174999999998
- type: recall_at_1000
value: 89.849
- type: recall_at_3
value: 37.88399999999999
- type: recall_at_5
value: 43.24541666666666
- type: map_at_1
value: 24.537
- type: map_at_10
value: 31.081999999999997
- type: map_at_100
value: 32.042
- type: map_at_1000
value: 32.141
- type: map_at_3
value: 29.137
- type: map_at_5
value: 30.079
- type: mrr_at_1
value: 27.454
- type: mrr_at_10
value: 33.694
- type: mrr_at_100
value: 34.579
- type: mrr_at_1000
value: 34.649
- type: mrr_at_3
value: 32.004
- type: mrr_at_5
value: 32.794000000000004
- type: ndcg_at_1
value: 27.454
- type: ndcg_at_10
value: 34.915
- type: ndcg_at_100
value: 39.641
- type: ndcg_at_1000
value: 42.105
- type: ndcg_at_3
value: 31.276
- type: ndcg_at_5
value: 32.65
- type: precision_at_1
value: 27.454
- type: precision_at_10
value: 5.337
- type: precision_at_100
value: 0.8250000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 13.241
- type: precision_at_5
value: 8.895999999999999
- type: recall_at_1
value: 24.537
- type: recall_at_10
value: 44.324999999999996
- type: recall_at_100
value: 65.949
- type: recall_at_1000
value: 84.017
- type: recall_at_3
value: 33.857
- type: recall_at_5
value: 37.316
- type: map_at_1
value: 17.122
- type: map_at_10
value: 24.32
- type: map_at_100
value: 25.338
- type: map_at_1000
value: 25.462
- type: map_at_3
value: 22.064
- type: map_at_5
value: 23.322000000000003
- type: mrr_at_1
value: 20.647
- type: mrr_at_10
value: 27.858
- type: mrr_at_100
value: 28.743999999999996
- type: mrr_at_1000
value: 28.819
- type: mrr_at_3
value: 25.769
- type: mrr_at_5
value: 26.964
- type: ndcg_at_1
value: 20.647
- type: ndcg_at_10
value: 28.849999999999998
- type: ndcg_at_100
value: 33.849000000000004
- type: ndcg_at_1000
value: 36.802
- type: ndcg_at_3
value: 24.799
- type: ndcg_at_5
value: 26.682
- type: precision_at_1
value: 20.647
- type: precision_at_10
value: 5.2170000000000005
- type: precision_at_100
value: 0.906
- type: precision_at_1000
value: 0.134
- type: precision_at_3
value: 11.769
- type: precision_at_5
value: 8.486
- type: recall_at_1
value: 17.122
- type: recall_at_10
value: 38.999
- type: recall_at_100
value: 61.467000000000006
- type: recall_at_1000
value: 82.716
- type: recall_at_3
value: 27.601
- type: recall_at_5
value: 32.471
- type: map_at_1
value: 24.396
- type: map_at_10
value: 33.415
- type: map_at_100
value: 34.521
- type: map_at_1000
value: 34.631
- type: map_at_3
value: 30.703999999999997
- type: map_at_5
value: 32.166
- type: mrr_at_1
value: 28.825
- type: mrr_at_10
value: 37.397000000000006
- type: mrr_at_100
value: 38.286
- type: mrr_at_1000
value: 38.346000000000004
- type: mrr_at_3
value: 35.028
- type: mrr_at_5
value: 36.32
- type: ndcg_at_1
value: 28.825
- type: ndcg_at_10
value: 38.656
- type: ndcg_at_100
value: 43.856
- type: ndcg_at_1000
value: 46.31
- type: ndcg_at_3
value: 33.793
- type: ndcg_at_5
value: 35.909
- type: precision_at_1
value: 28.825
- type: precision_at_10
value: 6.567
- type: precision_at_100
value: 1.0330000000000001
- type: precision_at_1000
value: 0.135
- type: precision_at_3
value: 15.516
- type: precision_at_5
value: 10.914
- type: recall_at_1
value: 24.396
- type: recall_at_10
value: 50.747
- type: recall_at_100
value: 73.477
- type: recall_at_1000
value: 90.801
- type: recall_at_3
value: 37.1
- type: recall_at_5
value: 42.589
- type: map_at_1
value: 25.072
- type: map_at_10
value: 34.307
- type: map_at_100
value: 35.725
- type: map_at_1000
value: 35.943999999999996
- type: map_at_3
value: 30.906
- type: map_at_5
value: 32.818000000000005
- type: mrr_at_1
value: 29.644
- type: mrr_at_10
value: 38.673
- type: mrr_at_100
value: 39.459
- type: mrr_at_1000
value: 39.527
- type: mrr_at_3
value: 35.771
- type: mrr_at_5
value: 37.332
- type: ndcg_at_1
value: 29.644
- type: ndcg_at_10
value: 40.548
- type: ndcg_at_100
value: 45.678999999999995
- type: ndcg_at_1000
value: 48.488
- type: ndcg_at_3
value: 34.887
- type: ndcg_at_5
value: 37.543
- type: precision_at_1
value: 29.644
- type: precision_at_10
value: 7.688000000000001
- type: precision_at_100
value: 1.482
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 16.206
- type: precision_at_5
value: 12.016
- type: recall_at_1
value: 25.072
- type: recall_at_10
value: 53.478
- type: recall_at_100
value: 76.07300000000001
- type: recall_at_1000
value: 93.884
- type: recall_at_3
value: 37.583
- type: recall_at_5
value: 44.464
- type: map_at_1
value: 20.712
- type: map_at_10
value: 27.467999999999996
- type: map_at_100
value: 28.502
- type: map_at_1000
value: 28.610000000000003
- type: map_at_3
value: 24.887999999999998
- type: map_at_5
value: 26.273999999999997
- type: mrr_at_1
value: 22.736
- type: mrr_at_10
value: 29.553
- type: mrr_at_100
value: 30.485
- type: mrr_at_1000
value: 30.56
- type: mrr_at_3
value: 27.078999999999997
- type: mrr_at_5
value: 28.401
- type: ndcg_at_1
value: 22.736
- type: ndcg_at_10
value: 32.023
- type: ndcg_at_100
value: 37.158
- type: ndcg_at_1000
value: 39.823
- type: ndcg_at_3
value: 26.951999999999998
- type: ndcg_at_5
value: 29.281000000000002
- type: precision_at_1
value: 22.736
- type: precision_at_10
value: 5.213
- type: precision_at_100
value: 0.832
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 11.459999999999999
- type: precision_at_5
value: 8.244
- type: recall_at_1
value: 20.712
- type: recall_at_10
value: 44.057
- type: recall_at_100
value: 67.944
- type: recall_at_1000
value: 87.925
- type: recall_at_3
value: 30.305
- type: recall_at_5
value: 36.071999999999996
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.181999999999999
- type: map_at_10
value: 16.66
- type: map_at_100
value: 18.273
- type: map_at_1000
value: 18.45
- type: map_at_3
value: 14.141
- type: map_at_5
value: 15.455
- type: mrr_at_1
value: 22.15
- type: mrr_at_10
value: 32.062000000000005
- type: mrr_at_100
value: 33.116
- type: mrr_at_1000
value: 33.168
- type: mrr_at_3
value: 28.827
- type: mrr_at_5
value: 30.892999999999997
- type: ndcg_at_1
value: 22.15
- type: ndcg_at_10
value: 23.532
- type: ndcg_at_100
value: 30.358
- type: ndcg_at_1000
value: 33.783
- type: ndcg_at_3
value: 19.222
- type: ndcg_at_5
value: 20.919999999999998
- type: precision_at_1
value: 22.15
- type: precision_at_10
value: 7.185999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 13.941
- type: precision_at_5
value: 10.906
- type: recall_at_1
value: 10.181999999999999
- type: recall_at_10
value: 28.104000000000003
- type: recall_at_100
value: 51.998999999999995
- type: recall_at_1000
value: 71.311
- type: recall_at_3
value: 17.698
- type: recall_at_5
value: 22.262999999999998
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.669
- type: map_at_10
value: 15.552
- type: map_at_100
value: 21.865000000000002
- type: map_at_1000
value: 23.268
- type: map_at_3
value: 11.309
- type: map_at_5
value: 13.084000000000001
- type: mrr_at_1
value: 55.50000000000001
- type: mrr_at_10
value: 66.46600000000001
- type: mrr_at_100
value: 66.944
- type: mrr_at_1000
value: 66.956
- type: mrr_at_3
value: 64.542
- type: mrr_at_5
value: 65.717
- type: ndcg_at_1
value: 44.75
- type: ndcg_at_10
value: 35.049
- type: ndcg_at_100
value: 39.073
- type: ndcg_at_1000
value: 46.208
- type: ndcg_at_3
value: 39.525
- type: ndcg_at_5
value: 37.156
- type: precision_at_1
value: 55.50000000000001
- type: precision_at_10
value: 27.800000000000004
- type: precision_at_100
value: 9.013
- type: precision_at_1000
value: 1.8800000000000001
- type: precision_at_3
value: 42.667
- type: precision_at_5
value: 36.0
- type: recall_at_1
value: 6.669
- type: recall_at_10
value: 21.811
- type: recall_at_100
value: 45.112
- type: recall_at_1000
value: 67.806
- type: recall_at_3
value: 13.373
- type: recall_at_5
value: 16.615
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 48.769999999999996
- type: f1
value: 42.91448356376592
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 54.013
- type: map_at_10
value: 66.239
- type: map_at_100
value: 66.62599999999999
- type: map_at_1000
value: 66.644
- type: map_at_3
value: 63.965
- type: map_at_5
value: 65.45400000000001
- type: mrr_at_1
value: 58.221000000000004
- type: mrr_at_10
value: 70.43700000000001
- type: mrr_at_100
value: 70.744
- type: mrr_at_1000
value: 70.75099999999999
- type: mrr_at_3
value: 68.284
- type: mrr_at_5
value: 69.721
- type: ndcg_at_1
value: 58.221000000000004
- type: ndcg_at_10
value: 72.327
- type: ndcg_at_100
value: 73.953
- type: ndcg_at_1000
value: 74.312
- type: ndcg_at_3
value: 68.062
- type: ndcg_at_5
value: 70.56400000000001
- type: precision_at_1
value: 58.221000000000004
- type: precision_at_10
value: 9.521
- type: precision_at_100
value: 1.045
- type: precision_at_1000
value: 0.109
- type: precision_at_3
value: 27.348
- type: precision_at_5
value: 17.794999999999998
- type: recall_at_1
value: 54.013
- type: recall_at_10
value: 86.957
- type: recall_at_100
value: 93.911
- type: recall_at_1000
value: 96.38
- type: recall_at_3
value: 75.555
- type: recall_at_5
value: 81.671
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 21.254
- type: map_at_10
value: 33.723
- type: map_at_100
value: 35.574
- type: map_at_1000
value: 35.730000000000004
- type: map_at_3
value: 29.473
- type: map_at_5
value: 31.543
- type: mrr_at_1
value: 41.358
- type: mrr_at_10
value: 49.498
- type: mrr_at_100
value: 50.275999999999996
- type: mrr_at_1000
value: 50.308
- type: mrr_at_3
value: 47.016000000000005
- type: mrr_at_5
value: 48.336
- type: ndcg_at_1
value: 41.358
- type: ndcg_at_10
value: 41.579
- type: ndcg_at_100
value: 48.455
- type: ndcg_at_1000
value: 51.165000000000006
- type: ndcg_at_3
value: 37.681
- type: ndcg_at_5
value: 38.49
- type: precision_at_1
value: 41.358
- type: precision_at_10
value: 11.543000000000001
- type: precision_at_100
value: 1.87
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.743000000000002
- type: precision_at_5
value: 17.994
- type: recall_at_1
value: 21.254
- type: recall_at_10
value: 48.698
- type: recall_at_100
value: 74.588
- type: recall_at_1000
value: 91.00200000000001
- type: recall_at_3
value: 33.939
- type: recall_at_5
value: 39.367000000000004
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.922
- type: map_at_10
value: 52.32599999999999
- type: map_at_100
value: 53.18000000000001
- type: map_at_1000
value: 53.245
- type: map_at_3
value: 49.294
- type: map_at_5
value: 51.202999999999996
- type: mrr_at_1
value: 71.843
- type: mrr_at_10
value: 78.24600000000001
- type: mrr_at_100
value: 78.515
- type: mrr_at_1000
value: 78.527
- type: mrr_at_3
value: 77.17500000000001
- type: mrr_at_5
value: 77.852
- type: ndcg_at_1
value: 71.843
- type: ndcg_at_10
value: 61.379
- type: ndcg_at_100
value: 64.535
- type: ndcg_at_1000
value: 65.888
- type: ndcg_at_3
value: 56.958
- type: ndcg_at_5
value: 59.434
- type: precision_at_1
value: 71.843
- type: precision_at_10
value: 12.686
- type: precision_at_100
value: 1.517
- type: precision_at_1000
value: 0.16999999999999998
- type: precision_at_3
value: 35.778
- type: precision_at_5
value: 23.422
- type: recall_at_1
value: 35.922
- type: recall_at_10
value: 63.43
- type: recall_at_100
value: 75.868
- type: recall_at_1000
value: 84.88900000000001
- type: recall_at_3
value: 53.666000000000004
- type: recall_at_5
value: 58.555
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 79.4408
- type: ap
value: 73.52820871620366
- type: f1
value: 79.36240238685001
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.826999999999998
- type: map_at_10
value: 34.04
- type: map_at_100
value: 35.226
- type: map_at_1000
value: 35.275
- type: map_at_3
value: 30.165999999999997
- type: map_at_5
value: 32.318000000000005
- type: mrr_at_1
value: 22.464000000000002
- type: mrr_at_10
value: 34.631
- type: mrr_at_100
value: 35.752
- type: mrr_at_1000
value: 35.795
- type: mrr_at_3
value: 30.798
- type: mrr_at_5
value: 32.946999999999996
- type: ndcg_at_1
value: 22.464000000000002
- type: ndcg_at_10
value: 40.919
- type: ndcg_at_100
value: 46.632
- type: ndcg_at_1000
value: 47.833
- type: ndcg_at_3
value: 32.992
- type: ndcg_at_5
value: 36.834
- type: precision_at_1
value: 22.464000000000002
- type: precision_at_10
value: 6.494
- type: precision_at_100
value: 0.9369999999999999
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.021
- type: precision_at_5
value: 10.347000000000001
- type: recall_at_1
value: 21.826999999999998
- type: recall_at_10
value: 62.132
- type: recall_at_100
value: 88.55199999999999
- type: recall_at_1000
value: 97.707
- type: recall_at_3
value: 40.541
- type: recall_at_5
value: 49.739
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 95.68399452804377
- type: f1
value: 95.25490609832268
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 83.15321477428182
- type: f1
value: 60.35476439087966
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.92669804976462
- type: f1
value: 69.22815107207565
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.4855413584398
- type: f1
value: 72.92107516103387
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.412679360205544
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.09211869875204
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.540919056982545
- type: mrr
value: 31.529904607063536
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.745
- type: map_at_10
value: 12.013
- type: map_at_100
value: 15.040000000000001
- type: map_at_1000
value: 16.427
- type: map_at_3
value: 8.841000000000001
- type: map_at_5
value: 10.289
- type: mrr_at_1
value: 45.201
- type: mrr_at_10
value: 53.483999999999995
- type: mrr_at_100
value: 54.20700000000001
- type: mrr_at_1000
value: 54.252
- type: mrr_at_3
value: 51.29
- type: mrr_at_5
value: 52.73
- type: ndcg_at_1
value: 43.808
- type: ndcg_at_10
value: 32.445
- type: ndcg_at_100
value: 30.031000000000002
- type: ndcg_at_1000
value: 39.007
- type: ndcg_at_3
value: 37.204
- type: ndcg_at_5
value: 35.07
- type: precision_at_1
value: 45.201
- type: precision_at_10
value: 23.684
- type: precision_at_100
value: 7.600999999999999
- type: precision_at_1000
value: 2.043
- type: precision_at_3
value: 33.953
- type: precision_at_5
value: 29.412
- type: recall_at_1
value: 5.745
- type: recall_at_10
value: 16.168
- type: recall_at_100
value: 30.875999999999998
- type: recall_at_1000
value: 62.686
- type: recall_at_3
value: 9.75
- type: recall_at_5
value: 12.413
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.828
- type: map_at_10
value: 53.239000000000004
- type: map_at_100
value: 54.035999999999994
- type: map_at_1000
value: 54.067
- type: map_at_3
value: 49.289
- type: map_at_5
value: 51.784
- type: mrr_at_1
value: 42.497
- type: mrr_at_10
value: 55.916999999999994
- type: mrr_at_100
value: 56.495
- type: mrr_at_1000
value: 56.516999999999996
- type: mrr_at_3
value: 52.800000000000004
- type: mrr_at_5
value: 54.722
- type: ndcg_at_1
value: 42.468
- type: ndcg_at_10
value: 60.437
- type: ndcg_at_100
value: 63.731
- type: ndcg_at_1000
value: 64.41799999999999
- type: ndcg_at_3
value: 53.230999999999995
- type: ndcg_at_5
value: 57.26
- type: precision_at_1
value: 42.468
- type: precision_at_10
value: 9.47
- type: precision_at_100
value: 1.1360000000000001
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 23.724999999999998
- type: precision_at_5
value: 16.593
- type: recall_at_1
value: 37.828
- type: recall_at_10
value: 79.538
- type: recall_at_100
value: 93.646
- type: recall_at_1000
value: 98.72999999999999
- type: recall_at_3
value: 61.134
- type: recall_at_5
value: 70.377
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.548
- type: map_at_10
value: 84.466
- type: map_at_100
value: 85.10600000000001
- type: map_at_1000
value: 85.123
- type: map_at_3
value: 81.57600000000001
- type: map_at_5
value: 83.399
- type: mrr_at_1
value: 81.24
- type: mrr_at_10
value: 87.457
- type: mrr_at_100
value: 87.574
- type: mrr_at_1000
value: 87.575
- type: mrr_at_3
value: 86.507
- type: mrr_at_5
value: 87.205
- type: ndcg_at_1
value: 81.25
- type: ndcg_at_10
value: 88.203
- type: ndcg_at_100
value: 89.457
- type: ndcg_at_1000
value: 89.563
- type: ndcg_at_3
value: 85.465
- type: ndcg_at_5
value: 87.007
- type: precision_at_1
value: 81.25
- type: precision_at_10
value: 13.373
- type: precision_at_100
value: 1.5270000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.417
- type: precision_at_5
value: 24.556
- type: recall_at_1
value: 70.548
- type: recall_at_10
value: 95.208
- type: recall_at_100
value: 99.514
- type: recall_at_1000
value: 99.988
- type: recall_at_3
value: 87.214
- type: recall_at_5
value: 91.696
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 53.04822095496839
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 60.30778476474675
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.692
- type: map_at_10
value: 11.766
- type: map_at_100
value: 13.904
- type: map_at_1000
value: 14.216999999999999
- type: map_at_3
value: 8.245
- type: map_at_5
value: 9.92
- type: mrr_at_1
value: 23.0
- type: mrr_at_10
value: 33.78
- type: mrr_at_100
value: 34.922
- type: mrr_at_1000
value: 34.973
- type: mrr_at_3
value: 30.2
- type: mrr_at_5
value: 32.565
- type: ndcg_at_1
value: 23.0
- type: ndcg_at_10
value: 19.863
- type: ndcg_at_100
value: 28.141
- type: ndcg_at_1000
value: 33.549
- type: ndcg_at_3
value: 18.434
- type: ndcg_at_5
value: 16.384
- type: precision_at_1
value: 23.0
- type: precision_at_10
value: 10.39
- type: precision_at_100
value: 2.235
- type: precision_at_1000
value: 0.35300000000000004
- type: precision_at_3
value: 17.133000000000003
- type: precision_at_5
value: 14.44
- type: recall_at_1
value: 4.692
- type: recall_at_10
value: 21.025
- type: recall_at_100
value: 45.324999999999996
- type: recall_at_1000
value: 71.675
- type: recall_at_3
value: 10.440000000000001
- type: recall_at_5
value: 14.64
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.96178184892842
- type: cos_sim_spearman
value: 79.6487740813199
- type: euclidean_pearson
value: 82.06661161625023
- type: euclidean_spearman
value: 79.64876769031183
- type: manhattan_pearson
value: 82.07061164575131
- type: manhattan_spearman
value: 79.65197039464537
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.15305604100027
- type: cos_sim_spearman
value: 74.27447427941591
- type: euclidean_pearson
value: 80.52737337565307
- type: euclidean_spearman
value: 74.27416077132192
- type: manhattan_pearson
value: 80.53728571140387
- type: manhattan_spearman
value: 74.28853605753457
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 83.44386080639279
- type: cos_sim_spearman
value: 84.17947648159536
- type: euclidean_pearson
value: 83.34145388129387
- type: euclidean_spearman
value: 84.17947648159536
- type: manhattan_pearson
value: 83.30699061927966
- type: manhattan_spearman
value: 84.18125737380451
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 81.57392220985612
- type: cos_sim_spearman
value: 78.80745014464101
- type: euclidean_pearson
value: 80.01660371487199
- type: euclidean_spearman
value: 78.80741240102256
- type: manhattan_pearson
value: 79.96810779507953
- type: manhattan_spearman
value: 78.75600400119448
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.85421063026625
- type: cos_sim_spearman
value: 87.55320285299192
- type: euclidean_pearson
value: 86.69750143323517
- type: euclidean_spearman
value: 87.55320284326378
- type: manhattan_pearson
value: 86.63379169960379
- type: manhattan_spearman
value: 87.4815029877984
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.31314130411842
- type: cos_sim_spearman
value: 85.3489588181433
- type: euclidean_pearson
value: 84.13240933463535
- type: euclidean_spearman
value: 85.34902871403281
- type: manhattan_pearson
value: 84.01183086503559
- type: manhattan_spearman
value: 85.19316703166102
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.09979781689536
- type: cos_sim_spearman
value: 88.87813323759015
- type: euclidean_pearson
value: 88.65413031123792
- type: euclidean_spearman
value: 88.87813323759015
- type: manhattan_pearson
value: 88.61818758256024
- type: manhattan_spearman
value: 88.81044100494604
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30693258111531
- type: cos_sim_spearman
value: 62.195516523251946
- type: euclidean_pearson
value: 62.951283701049476
- type: euclidean_spearman
value: 62.195516523251946
- type: manhattan_pearson
value: 63.068322281439535
- type: manhattan_spearman
value: 62.10621171028406
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.27092833763909
- type: cos_sim_spearman
value: 84.84429717949759
- type: euclidean_pearson
value: 84.8516966060792
- type: euclidean_spearman
value: 84.84429717949759
- type: manhattan_pearson
value: 84.82203139242881
- type: manhattan_spearman
value: 84.8358503952945
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 83.10290863981409
- type: mrr
value: 95.31168450286097
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 52.161
- type: map_at_10
value: 62.138000000000005
- type: map_at_100
value: 62.769
- type: map_at_1000
value: 62.812
- type: map_at_3
value: 59.111000000000004
- type: map_at_5
value: 60.995999999999995
- type: mrr_at_1
value: 55.333
- type: mrr_at_10
value: 63.504000000000005
- type: mrr_at_100
value: 64.036
- type: mrr_at_1000
value: 64.08
- type: mrr_at_3
value: 61.278
- type: mrr_at_5
value: 62.778
- type: ndcg_at_1
value: 55.333
- type: ndcg_at_10
value: 66.678
- type: ndcg_at_100
value: 69.415
- type: ndcg_at_1000
value: 70.453
- type: ndcg_at_3
value: 61.755
- type: ndcg_at_5
value: 64.546
- type: precision_at_1
value: 55.333
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.043
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 24.221999999999998
- type: precision_at_5
value: 16.333000000000002
- type: recall_at_1
value: 52.161
- type: recall_at_10
value: 79.156
- type: recall_at_100
value: 91.333
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 66.43299999999999
- type: recall_at_5
value: 73.272
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81287128712871
- type: cos_sim_ap
value: 95.30034785910676
- type: cos_sim_f1
value: 90.28629856850716
- type: cos_sim_precision
value: 92.36401673640168
- type: cos_sim_recall
value: 88.3
- type: dot_accuracy
value: 99.81287128712871
- type: dot_ap
value: 95.30034785910676
- type: dot_f1
value: 90.28629856850716
- type: dot_precision
value: 92.36401673640168
- type: dot_recall
value: 88.3
- type: euclidean_accuracy
value: 99.81287128712871
- type: euclidean_ap
value: 95.30034785910676
- type: euclidean_f1
value: 90.28629856850716
- type: euclidean_precision
value: 92.36401673640168
- type: euclidean_recall
value: 88.3
- type: manhattan_accuracy
value: 99.80990099009901
- type: manhattan_ap
value: 95.26880751950654
- type: manhattan_f1
value: 90.22177419354838
- type: manhattan_precision
value: 90.95528455284553
- type: manhattan_recall
value: 89.5
- type: max_accuracy
value: 99.81287128712871
- type: max_ap
value: 95.30034785910676
- type: max_f1
value: 90.28629856850716
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 58.518662504351184
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 34.96168178378587
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 52.04862593471896
- type: mrr
value: 52.97238402936932
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.092545236479946
- type: cos_sim_spearman
value: 31.599851000175498
- type: dot_pearson
value: 30.092542723901676
- type: dot_spearman
value: 31.599851000175498
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.189
- type: map_at_10
value: 1.662
- type: map_at_100
value: 9.384
- type: map_at_1000
value: 22.669
- type: map_at_3
value: 0.5559999999999999
- type: map_at_5
value: 0.9039999999999999
- type: mrr_at_1
value: 68.0
- type: mrr_at_10
value: 81.01899999999999
- type: mrr_at_100
value: 81.01899999999999
- type: mrr_at_1000
value: 81.01899999999999
- type: mrr_at_3
value: 79.333
- type: mrr_at_5
value: 80.733
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 65.913
- type: ndcg_at_100
value: 51.895
- type: ndcg_at_1000
value: 46.967
- type: ndcg_at_3
value: 65.49199999999999
- type: ndcg_at_5
value: 66.69699999999999
- type: precision_at_1
value: 68.0
- type: precision_at_10
value: 71.6
- type: precision_at_100
value: 53.66
- type: precision_at_1000
value: 21.124000000000002
- type: precision_at_3
value: 72.667
- type: precision_at_5
value: 74.0
- type: recall_at_1
value: 0.189
- type: recall_at_10
value: 1.913
- type: recall_at_100
value: 12.601999999999999
- type: recall_at_1000
value: 44.296
- type: recall_at_3
value: 0.605
- type: recall_at_5
value: 1.018
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.701
- type: map_at_10
value: 10.445
- type: map_at_100
value: 17.324
- type: map_at_1000
value: 19.161
- type: map_at_3
value: 5.497
- type: map_at_5
value: 7.278
- type: mrr_at_1
value: 30.612000000000002
- type: mrr_at_10
value: 45.534
- type: mrr_at_100
value: 45.792
- type: mrr_at_1000
value: 45.806999999999995
- type: mrr_at_3
value: 37.755
- type: mrr_at_5
value: 43.469
- type: ndcg_at_1
value: 26.531
- type: ndcg_at_10
value: 26.235000000000003
- type: ndcg_at_100
value: 39.17
- type: ndcg_at_1000
value: 51.038
- type: ndcg_at_3
value: 23.625
- type: ndcg_at_5
value: 24.338
- type: precision_at_1
value: 30.612000000000002
- type: precision_at_10
value: 24.285999999999998
- type: precision_at_100
value: 8.224
- type: precision_at_1000
value: 1.6179999999999999
- type: precision_at_3
value: 24.490000000000002
- type: precision_at_5
value: 24.898
- type: recall_at_1
value: 2.701
- type: recall_at_10
value: 17.997
- type: recall_at_100
value: 51.766999999999996
- type: recall_at_1000
value: 87.863
- type: recall_at_3
value: 6.295000000000001
- type: recall_at_5
value: 9.993
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 73.3474
- type: ap
value: 15.393431414459924
- type: f1
value: 56.466681887882416
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 62.062818336163
- type: f1
value: 62.11230840463252
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 42.464892820845115
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.15962329379508
- type: cos_sim_ap
value: 74.73674057919256
- type: cos_sim_f1
value: 68.81245642574947
- type: cos_sim_precision
value: 61.48255813953488
- type: cos_sim_recall
value: 78.12664907651715
- type: dot_accuracy
value: 86.15962329379508
- type: dot_ap
value: 74.7367634988281
- type: dot_f1
value: 68.81245642574947
- type: dot_precision
value: 61.48255813953488
- type: dot_recall
value: 78.12664907651715
- type: euclidean_accuracy
value: 86.15962329379508
- type: euclidean_ap
value: 74.7367761466634
- type: euclidean_f1
value: 68.81245642574947
- type: euclidean_precision
value: 61.48255813953488
- type: euclidean_recall
value: 78.12664907651715
- type: manhattan_accuracy
value: 86.21326816474935
- type: manhattan_ap
value: 74.64416473733951
- type: manhattan_f1
value: 68.80924855491331
- type: manhattan_precision
value: 61.23456790123457
- type: manhattan_recall
value: 78.52242744063325
- type: max_accuracy
value: 86.21326816474935
- type: max_ap
value: 74.7367761466634
- type: max_f1
value: 68.81245642574947
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.97620988085536
- type: cos_sim_ap
value: 86.08680845745758
- type: cos_sim_f1
value: 78.02793637114438
- type: cos_sim_precision
value: 73.11082699683736
- type: cos_sim_recall
value: 83.65414228518632
- type: dot_accuracy
value: 88.97620988085536
- type: dot_ap
value: 86.08681149437946
- type: dot_f1
value: 78.02793637114438
- type: dot_precision
value: 73.11082699683736
- type: dot_recall
value: 83.65414228518632
- type: euclidean_accuracy
value: 88.97620988085536
- type: euclidean_ap
value: 86.08681215460771
- type: euclidean_f1
value: 78.02793637114438
- type: euclidean_precision
value: 73.11082699683736
- type: euclidean_recall
value: 83.65414228518632
- type: manhattan_accuracy
value: 88.88888888888889
- type: manhattan_ap
value: 86.02916327562438
- type: manhattan_f1
value: 78.02063045516843
- type: manhattan_precision
value: 73.38851947346994
- type: manhattan_recall
value: 83.2768709578072
- type: max_accuracy
value: 88.97620988085536
- type: max_ap
value: 86.08681215460771
- type: max_f1
value: 78.02793637114438
---
# ngxson/jina-embeddings-v2-base-en-Q8_0-GGUF
This model was converted to GGUF format from [`jinaai/jina-embeddings-v2-base-en`](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ngxson/jina-embeddings-v2-base-en-Q8_0-GGUF --hf-file jina-embeddings-v2-base-en-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ngxson/jina-embeddings-v2-base-en-Q8_0-GGUF --hf-file jina-embeddings-v2-base-en-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo ngxson/jina-embeddings-v2-base-en-Q8_0-GGUF --hf-file jina-embeddings-v2-base-en-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo ngxson/jina-embeddings-v2-base-en-Q8_0-GGUF --hf-file jina-embeddings-v2-base-en-q8_0.gguf -c 2048
```
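### Embeddings
Since this checkpoint is an embedding model, you will usually want vectors rather than generated text. The following is a hedged sketch, assuming a recent llama.cpp build in which starting the server with `--embedding` enables the OpenAI-compatible embeddings endpoint:
```bash
# Start the server in embedding mode (flag availability depends on your llama.cpp version)
llama-server --hf-repo ngxson/jina-embeddings-v2-base-en-Q8_0-GGUF --hf-file jina-embeddings-v2-base-en-q8_0.gguf --embedding -c 2048
# Request an embedding over the OpenAI-compatible endpoint
curl http://localhost:8080/v1/embeddings -H "Content-Type: application/json" -d '{"input": "The meaning to life and the universe is"}'
```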
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
aisingapore/sea-lion-7b-instruct-research | aisingapore | text-generation | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"conversational",
"custom_code",
"en",
"zh",
"id",
"ms",
"tl",
"my",
"vi",
"th",
"lo",
"km",
"ta",
"arxiv:2309.06085",
"base_model:aisingapore/sea-lion-7b",
"base_model:finetune:aisingapore/sea-lion-7b",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-06T05:40:57 | 2024-11-14T05:46:01 | 42 | 14 | ---
base_model: aisingapore/sea-lion-7b
language:
- en
- zh
- id
- ms
- tl
- my
- vi
- th
- lo
- km
- ta
license: cc-by-nc-sa-4.0
new_version: aisingapore/gemma2-9b-cpt-sea-lionv3-instruct
---
# SEA-LION-7B-Instruct-Research
SEA-LION is a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
The models range in size from 3 billion to 7 billion parameters.
This is the card for the SEA-LION 7B Instruct (Non-Commercial) model.
For more details on the base model, please refer to the [base model's model card](https://huggingface.co/aisingapore/sea-lion-7b).
For the commercially permissive model, please refer to the [SEA-LION-7B-Instruct](https://huggingface.co/aisingapore/sea-lion-7b-instruct).
SEA-LION stands for <i>Southeast Asian Languages In One Network</i>.
## Model Details
### Model Description
The SEA-LION model is a significant leap forward in the field of Natural Language Processing,
specifically trained to understand the SEA regional context.
SEA-LION is built on the robust MPT architecture and has a vocabulary size of 256K.
For tokenization, the model employs our custom SEABPETokenizer, which is specially tailored for SEA languages, ensuring optimal model performance.
The pre-training data for the base SEA-LION model encompasses 980B tokens.
The model was then further instruction-tuned on <b>Indonesian data only</b>.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages:** English, Chinese, Indonesian, Malay, Thai, Vietnamese, Filipino, Tamil, Burmese, Khmer, Lao
- **License:** CC BY-NC-SA 4.0 License
### Benchmark Performance
SEA-LION-7B-Instruct-NC performs better than other models of comparable size when tested on tasks in the Indonesian language.
We evaluated SEA-LION-7B-Instruct-NC on the [BHASA benchmark](https://arxiv.org/abs/2309.06085) and
compared it against [Llama-2-7B](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf), [Mistral-7B](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
and [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b-instruct).
We only evaluated it on the Indonesian tasks as the model was only instruction-tuned in Indonesian.
The evaluation was done zero-shot with Indonesian prompts, and only a sample of 100–1000 instances per dataset was used, following the setting described in the BHASA paper.
The scores shown in the tables below have been adjusted to only consider answers provided in the appropriate language.
For Natural Language Understanding (NLU) tasks, we tested the model on Sentiment Analysis (Sent) using the NusaX dataset, Question Answering (QA) using the TyDiQA dataset, and Toxicity Detection (Tox) using the Indonesian Multi-Label Hate Speech Detection dataset. The metric used for all three tasks is the F1 score.
For Natural Language Generation (NLG) tasks, we tested the model on Machine Translation from English to Indonesian (MT-EN-ID) and from Indonesian to English (MT-ID-EN) using the FLORES-200 dataset, and Abstractive Summarization (AbsSum) using the XLSum dataset. The metrics used for Machine Translation are ChrF++ and COMET22, and ROUGE-L is used for Abstractive Summarization.
For Natural Language Reasoning (NLR) tasks, we tested the model on Natural Language Inference (NLI) using the IndoNLI lay dataset and on Causal Reasoning (Causal) using the XCOPA dataset. The metric for both tasks is accuracy.
| Model | QA (F1) | Sentiment (F1) | Toxicity (F1) | Eng>Indo (ChrF++) | Indo>Eng (ChrF++) | Summary (ROUGE-L) | NLI (Acc) | Causal (Acc) |
|--------------------------------|---------|----------------|---------------|-------------------|-------------------|-------------------|-----------|--------------|
| SEA-LION-7B-Instruct-Research | 24.86 | 76.13 | 24.45 | 52.50 | 46.82 | 15.44 | 33.20 | 23.80 |
| SEA-LION-7B-Instruct | **68.41**| **91.45** | 17.98 | 57.48 | 58.04 | **17.54** | 53.10 | 60.80 |
| SeaLLM 7B v1 | 30.96 | 56.29 | 22.60 | 62.23 | 41.55 | 14.03 | 26.50 | 56.60 |
| SeaLLM 7B v2 | 44.40 | 80.13 | **55.24** | 64.01 | **63.28** | 17.31 | 43.60 | 82.00 |
| Sailor-7B (Base) | 65.43 | 59.48 | 20.48 | **64.27** | 60.68 | 8.69 | 15.10 | 38.40 |
| Sailor-7B-Chat | 38.02 | 87.64 | 52.07 | 64.25 | 61.87 | 15.28 | **68.30** |**85.60** |
| Llama 2 7B Chat | 11.12 | 52.32 | 0.00 | 44.09 | 57.58 | 9.24 | 0.00 | 0.00 |
| Mistral 7B Instruct v0.1 | 38.85 | 74.38 | 20.83 | 30.60 | 51.43 | 15.63 | 28.60 | 50.80 |
| GPT-4 (gpt-4-0314) | 73.60 | 74.14 | 63.96 | 69.38 | 67.53 | 18.71 | 83.20 | 96.00 |
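The language adjustment described above can be illustrated with a short sketch (this is not the authors' scoring code; `langdetect` and all names here are assumptions):
```python
# Hedged illustration of language-adjusted scoring: answers in the wrong
# language are treated as incorrect before accuracy is computed.
from langdetect import detect  # assumed dependency

def language_adjusted_accuracy(predictions, references, lang="id"):
    correct = 0
    for pred, ref in zip(predictions, references):
        try:
            if detect(pred) != lang:
                continue  # wrong-language answer counts as incorrect
        except Exception:
            continue      # undetectable output also counts as incorrect
        correct += int(pred.strip() == ref.strip())
    return correct / len(references)
```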
## Technical Specifications
### Model Architecture and Objective
SEA-LION is a decoder model using the MPT architecture.
| Parameter | SEA-LION 7B |
|-----------------|:-----------:|
| Layers | 32 |
| d_model | 4096 |
| head_dim | 32 |
| Vocabulary | 256000 |
| Sequence Length | 2048 |
### Tokenizer Details
We sample 20M lines from the training data to train the tokenizer.<br>
The framework for training is [SentencePiece](https://github.com/google/sentencepiece).<br>
The tokenizer type is Byte-Pair Encoding (BPE).
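For reference, this setup corresponds roughly to the following SentencePiece invocation (a minimal sketch: the BPE type and 256K vocabulary come from this card, while the file names are hypothetical):
```python
# Minimal sketch of BPE tokenizer training with SentencePiece.
import sentencepiece as spm

spm.SentencePieceTrainer.train(
    input="sampled_training_lines.txt",  # hypothetical path to the 20M sampled lines
    model_prefix="seabpe",               # hypothetical output prefix
    model_type="bpe",                    # tokenizer type stated above
    vocab_size=256000,                   # vocabulary size stated in this card
)
```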
### Example Usage
```python
# Please use transformers==4.34.1
from transformers import AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True is required because the model ships custom MPT code
tokenizer = AutoTokenizer.from_pretrained("aisingapore/sea-lion-7b-instruct-nc", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("aisingapore/sea-lion-7b-instruct-nc", trust_remote_code=True)

# Instruction format expected by the model
prompt_template = "### USER:\n{human_prompt}\n\n### RESPONSE:\n"
# Indonesian sentiment prompt: "What is the sentiment of the following sentence?
# Sentence: This book is very boring. Answer:"
prompt = """Apa sentimen dari kalimat berikut ini?
Kalimat: Buku ini sangat membosankan.
Jawaban: """
full_prompt = prompt_template.format(human_prompt=prompt)

tokens = tokenizer(full_prompt, return_tensors="pt")
output = model.generate(tokens["input_ids"], max_new_tokens=20, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## The Team
Lam Wen Zhi Clarence<br>
Leong Wei Qi<br>
Li Yier<br>
Liu Bing Jie Darius<br>
Lovenia Holy<br>
Montalan Jann Railey<br>
Ng Boon Cheong Raymond<br>
Ngui Jian Gang<br>
Nguyen Thanh Ngan<br>
Ong Tat-Wee David<br>
Rengarajan Hamsawardhini<br>
Susanto Yosephine<br>
Tai Ngee Chia<br>
Tan Choon Meng<br>
Teo Jin Howe<br>
Teo Eng Sipp Leslie<br>
Teo Wei Yi<br>
Tjhi William<br>
Yeo Yeow Tong<br>
Yong Xianbin<br>
## Acknowledgements
AI Singapore is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore.
Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the non-commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claim, damages, or other liability
arising from the use of the released weights and code. | [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] |
THUDM/cogvlm2-video-llama3-base | THUDM | text-generation | [
"transformers",
"safetensors",
"text-generation",
"chat",
"cogvlm2",
"cogvlm--video",
"conversational",
"custom_code",
"en",
"license:other",
"autotrain_compatible",
"region:us"
] | 2024-07-03T02:22:24 | 2024-07-24T09:52:22 | 42 | 1 | ---
language:
- en
license: other
license_name: cogvlm2
license_link: https://huggingface.co/THUDM/cogvlm2-video-llama3-base/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- cogvlm2
- cogvlm--video
inference: false
---
# CogVLM2-Video-Llama3-Base
[Chinese version of this README](README_zh.md)
## Introduction
CogVLM2-Video achieves state-of-the-art performance on multiple video question answering tasks and can understand
videos of up to one minute in length. We provide two example videos below to demonstrate CogVLM2-Video's video
understanding and video temporal grounding capabilities.
<table>
<tr>
<td>
<video width="100%" controls>
<source src="https://github.com/THUDM/CogVLM2/raw/main/resources/videos/lion.mp4" type="video/mp4">
</video>
</td>
<td>
<video width="100%" controls>
<source src="https://github.com/THUDM/CogVLM2/raw/main/resources/videos/basketball.mp4" type="video/mp4">
</video>
</td>
</tr>
</table>
## Benchmark
The following diagram shows the performance of CogVLM2-Video on
the [MVBench](https://github.com/OpenGVLab/Ask-Anything), [VideoChatGPT-Bench](https://github.com/mbzuai-oryx/Video-ChatGPT)
and Zero-shot VideoQA datasets (MSVD-QA, MSRVTT-QA, ActivityNet-QA). VCG-* refers to VideoChatGPT-Bench, ZS-* to the
Zero-shot VideoQA datasets, and MV-* to the main categories of MVBench.

Performance on VideoChatGPT-Bench and the Zero-shot VideoQA datasets:
| Models | VCG-AVG | VCG-CI | VCG-DO | VCG-CU | VCG-TU | VCG-CO | ZS-AVG |
|-----------------------|----------|----------|----------|----------|----------|----------|-----------|
| IG-VLM GPT4V | 3.17 | 3.40 | 2.80 | 3.61 | 2.89 | 3.13 | 65.70 |
| ST-LLM | 3.15 | 3.23 | 3.05 | 3.74 | 2.93 | 2.81 | 62.90 |
| ShareGPT4Video | N/A | N/A | N/A | N/A | N/A | N/A | 46.50 |
| VideoGPT+ | 3.28 | 3.27 | 3.18 | 3.74 | 2.83 | **3.39** | 61.20 |
| VideoChat2_HD_mistral | 3.10 | 3.40 | 2.91 | 3.72 | 2.65 | 2.84 | 57.70 |
| PLLaVA-34B | 3.32 | **3.60** | 3.20 | **3.90** | 2.67 | 3.25 | **68.10** |
| CogVLM2-Video | **3.41** | 3.49 | **3.46** | 3.87 | **2.98** | 3.23 | 66.60 |
Performance on MVBench dataset:
| Models | AVG | AA | AC | AL | AP | AS | CO | CI | EN | ER | FA | FP | MA | MC | MD | OE | OI | OS | ST | SC | UA |
|-----------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|
| IG-VLM GPT4V | 43.7 | 72.0 | 39.0 | 40.5 | 63.5 | 55.5 | 52.0 | 11.0 | 31.0 | 59.0 | 46.5 | 47.5 | 22.5 | 12.0 | 12.0 | 18.5 | 59.0 | 29.5 | 83.5 | 45.0 | 73.5 |
| ST-LLM | 54.9 | 84.0 | 36.5 | 31.0 | 53.5 | 66.0 | 46.5 | 58.5 | 34.5 | 41.5 | 44.0 | 44.5 | 78.5 | 56.5 | 42.5 | 80.5 | 73.5 | 38.5 | 86.5 | 43.0 | 58.5 |
| ShareGPT4Video | 51.2 | 79.5 | 35.5 | 41.5 | 39.5 | 49.5 | 46.5 | 51.5 | 28.5 | 39.0 | 40.0 | 25.5 | 75.0 | 62.5 | 50.5 | 82.5 | 54.5 | 32.5 | 84.5 | 51.0 | 54.5 |
| VideoGPT+ | 58.7 | 83.0 | 39.5 | 34.0 | 60.0 | 69.0 | 50.0 | 60.0 | 29.5 | 44.0 | 48.5 | 53.0 | 90.5 | 71.0 | 44.0 | 85.5 | 75.5 | 36.0 | 89.5 | 45.0 | 66.5 |
| VideoChat2_HD_mistral | **62.3** | 79.5 | **60.0** | **87.5** | 50.0 | 68.5 | **93.5** | 71.5 | 36.5 | 45.0 | 49.5 | **87.0** | 40.0 | **76.0** | **92.0** | 53.0 | 62.0 | **45.5** | 36.0 | 44.0 | 69.5 |
| PLLaVA-34B | 58.1 | 82.0 | 40.5 | 49.5 | 53.0 | 67.5 | 66.5 | 59.0 | **39.5** | **63.5** | 47.0 | 50.0 | 70.0 | 43.0 | 37.5 | 68.5 | 67.5 | 36.5 | 91.0 | 51.5 | **79.0** |
| CogVLM2-Video | **62.3** | **85.5** | 41.5 | 31.5 | **65.5** | **79.5** | 58.5 | **77.0** | 28.5 | 42.5 | **54.0** | 57.0 | **91.5** | 73.0 | 48.0 | **91.0** | **78.0** | 36.0 | **91.5** | **47.0** | 68.5 |
## Evaluation details
We follow prior work in evaluating the performance of our model, crafting a task-specific prompt for each benchmark:
``` python
# For MVBench
prompt = f"Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects, and the action and pose of persons. Based on your observations, select the best option that accurately addresses the question.\n " + f"{prompt.replace('Short Answer.', '')}\n" + "Short Answer:"
# For VideoChatGPT-Bench
prompt = f"Carefully watch the video and pay attention to the cause and sequence of events, the detail and movement of objects, and the action and pose of persons. Based on your observations, comprehensively answer the following question. Your answer should be long and cover all the related aspects\n " + f"{prompt.replace('Short Answer.', '')}\n" + "Answer:"
# For Zero-shot VideoQA
prompt = f"The input consists of a sequence of key frames from a video. Answer the question comprehensively including all the possible verbs and nouns that can discribe the events, followed by significant events, characters, or objects that appear throughout the frames.\n " + f"{prompt.replace('Short Answer.', '')}\n" + "Answer:"
```
For evaluation code, please refer to
the [evaluation script](https://github.com/magic-research/PLLaVA/blob/main/README.md) in PLLaVA.
## Using This Model
This repository hosts the `base` version of the model, which does not support chat.
You can quickly install the Python package dependencies and run model inference following the instructions in
our [GitHub demo](https://github.com/THUDM/CogVLM2/tree/main/video_demo).
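For orientation, loading the checkpoint follows the usual `trust_remote_code` pattern (a hedged sketch only; the video preprocessing and the inference loop live in the demo code linked above, and the dtype below is an assumption):
```python
# Minimal loading sketch; see the GitHub demo for the full video pipeline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THUDM/cogvlm2-video-llama3-base"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed; adjust to your hardware
    trust_remote_code=True,
)
```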
## License
This model is released under the
CogVLM2 [LICENSE](./LICENSE).
For models built with Meta Llama 3, please also adhere to
the [LLAMA3_LICENSE](./LLAMA3_LICENSE).
## Training details
Please refer to our technical report for the training recipe and hyperparameters.
| [
"QUESTION_ANSWERING"
] | [
"CRAFT"
] |
TRI-ML/DCLM-1B-v0 | TRI-ML | null | [
"transformers",
"safetensors",
"openlm",
"arxiv:2406.11794",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-16T18:04:55 | 2024-07-25T23:21:51 | 42 | 12 | ---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/63118add64939fabc0108b28/BB42g4V8HTxb5dR4tcy8A.png" alt="DCLM Logo" width="300" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Check out our more recent, higher-performing model here: https://huggingface.co/TRI-ML/DCLM-1B/
# Model Card for DCLM-1B-v0
DCLM-1B-v0 is a 1.4 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
## Model Details
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|:------:|:-----------------:|:--------:|:-------------:|:-----------------:|:----------------:|
| 1.4B | 2.6T | 24 | 2048 | 16 | 2048 |
### Model Description
- **Developed by:** DataComp for Language Models (DCLM) Team
- **Model type:** Decoder-only Transformer language model
- **Language(s):** English (primarily)
- **License:** Apache 2.0
- **Contact:** [email protected]
- **Date:** July 2024
### Model Sources
- **Repository:** https://github.com/mlfoundations/dclm
- **Dataset:** https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)
## Quickstart
First, install open_lm:
```bash
pip install git+https://github.com/mlfoundations/open_lm.git
```
Then you can load the model using HF's Auto classes as follows:
```python
from open_lm.hf import *  # registers the open_lm architecture with the HF Auto classes
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TRI-ML/DCLM-1B-v0")
model = AutoModelForCausalLM.from_pretrained("TRI-ML/DCLM-1B-v0")

# Sampled decoding with a light repetition penalty
inputs = tokenizer(["Machine learning is"], return_tensors="pt")
gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(inputs['input_ids'], **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
```
### Training Details
The model was trained using the following setup:
- **Architecture:** Decoder-only Transformer
- **Framework:** PyTorch with OpenLM
- **Optimizer:** AdamW
- **Learning Rate:** 1e-2 (peak)
- **Weight Decay:** 1e-2
- **Batch Size:** 2048 sequences
- **Sequence Length:** 2048 tokens
- **Total Training Tokens:** 2.6T
- **Hardware:** Trained on H100 GPUs
We train our 1.4B model for 2.6T tokens on DCLM-Baseline.
Similar to the 7B model training recipe described in Appendix P of our paper,
we train for 2.3T tokens on DCLM-Baseline combined with the StarCoder and ProofPile2 datasets,
with the hyperparameters described above.
Note that we use a learning-rate schedule set for the full dataset and stop training early at 2.3T tokens.
We then cool down the model on the same dataset to the cooldown LR over 200B tokens.
We will update our paper soon with more training details.
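The truncated schedule plus cooldown can be sketched as follows (a hedged illustration, not the training code: the peak LR, token counts, and cooldown length come from this card, while the cosine shape and the final LR are assumptions):
```python
# Hedged sketch of the learning-rate schedule described above.
import math

FULL_TOKENS     = 2.6e12  # budget the schedule was configured for
STOP_TOKENS     = 2.3e12  # early-stopping point described above
COOLDOWN_TOKENS = 2.0e11  # 200B-token cooldown described above
PEAK_LR         = 1e-2    # peak LR from this card
FINAL_LR        = 3e-5    # assumed cooldown target

def lr_at(tokens: float) -> float:
    if tokens <= STOP_TOKENS:
        # main phase: cosine decay as if training ran for the full budget
        return FINAL_LR + 0.5 * (PEAK_LR - FINAL_LR) * (1 + math.cos(math.pi * tokens / FULL_TOKENS))
    # cooldown phase: linear decay from the truncation point to the final LR
    frac = min((tokens - STOP_TOKENS) / COOLDOWN_TOKENS, 1.0)
    start = lr_at(STOP_TOKENS)
    return start + frac * (FINAL_LR - start)
```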
## Evaluation
Here are the evaluation results for DCLM-1B on various tasks (using [llm-foundry](https://github.com/mosaicml/llm-foundry) eval suite)
| Task | Score |
|------------------------------------------|---------|
| AGI Eval LSAT AR | 0.2348 |
| AGI Eval LSAT LR | 0.3098 |
| AGI Eval LSAT RC | 0.3321 |
| AGI Eval SAT English | 0.3883 |
| AGI Eval SAT Math (CoT) | 0.0182 |
| AQuA (CoT) | 0.0245 |
| ARC (challenge) | 0.4343 |
| ARC (easy) | 0.7290 |
| BBQ | 0.4670 |
| BigBench Conceptual Combinations | 0.4660 |
| BigBench Conlang Translation | 0.0732 |
| BigBench CS Algorithms | 0.4515 |
| BigBench Dyck Languages | 0.1990 |
| BigBench Elementary Math QA | 0.2558 |
| BigBench Language Identification | 0.2911 |
| BigBench Logical Deduction | 0.2480 |
| BigBench Misconceptions | 0.5068 |
| BigBench Novel Concepts | 0.5312 |
| BigBench Operators | 0.2714 |
| BigBench QA Wikidata | 0.6687 |
| BigBench Repeat Copy Logic | 0.1562 |
| BigBench Strange Stories | 0.6839 |
| BigBench Strategy QA | 0.5762 |
| BigBench Understanding Fables | 0.4127 |
| BoolQ | 0.7131 |
| CommonSenseQA | 0.6110 |
| COPA | 0.7900 |
| CoQA | 0.4257 |
| Enterprise PII Classification | 0.5110 |
| GPQA Diamond | 0.2121 |
| GPQA | 0.2344 |
| GSM8K (CoT) | 0.0371 |
| HellaSwag | 0.7087 |
| HellaSwag (zero-shot) | 0.7001 |
| Jeopardy | 0.4218 |
| LAMBADA (OpenAI) | 0.6938 |
| LogiQA | 0.3026 |
| MathQA | 0.2598 |
| MMLU (few-shot) | 0.4193 |
| MMLU (zero-shot) | 0.3543 |
| OpenBookQA | 0.4380 |
| PIQA | 0.7786 |
| PubMedQA (labeled) | 0.2560 |
| Simple Arithmetic (no spaces) | 0.0280 |
| Simple Arithmetic (with spaces) | 0.0300 |
| SIQA | 0.6735 |
| SQuAD | 0.5424 |
| SVAMP (CoT) | 0.1800 |
| TriviaQA (small subset) | 0.3603 |
| Winogender (MC female) | 0.4833 |
| Winogender (MC male) | 0.5000 |
| Winograd | 0.8352 |
| Winogrande | 0.6527 |
Note: All scores are presented as decimal values between 0 and 1, representing the proportion of correct answers or the model's performance on each task.
Below we compare to the recently released SmolLM (https://huggingface.co/blog/smollm) on key benchmarks. As described in the paper, Core accuracy is the average of
centered accuracy on 22 tasks (including HellaSwag and ARC-E), and Extended is the centered accuracy averaged over 53 tasks.
We evaluate the models using llm-foundry.
| Task | Core | Extended | MMLU 5-shot |
|:---------:|:------:|:----------:|:-------------:|
| DCLM-1B | 42.3 | 25.1 | 41.9 |
| SmolLM | 36.3 | 21.2 | 30.0 |
## Limitations and Biases
While DCLM-1B demonstrates strong performance across a range of tasks, it's important to note:
1. The model may exhibit biases present in its training data, which is derived from web crawl data.
2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
3. Performance on tasks not included in the evaluation suite may vary.
4. The model's knowledge is limited to its training data cutoff date.
## Ethical Considerations
Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.
## Citation
If you use this model in your research, please cite:
```
@article{Li2024DataCompLM,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
journal={arXiv preprint arXiv:2406.11794},
year={2024}
}
```
| [
"TRANSLATION"
] | [
"PUBMEDQA"
] |
yoeven/multilingual-e5-large-instruct-Q5_0-GGUF | yoeven | null | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:quantized:intfloat/multilingual-e5-large-instruct",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | 2025-01-06T13:50:45 | 2025-01-06T13:50:51 | 42 | 2 | ---
base_model: intfloat/multilingual-e5-large-instruct
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- sentence-transformers
- transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: multilingual-e5-large-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.23880597014924
- type: ap
value: 39.07351965022687
- type: f1
value: 70.04836733862683
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 66.71306209850107
- type: ap
value: 79.01499914759529
- type: f1
value: 64.81951817560703
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 73.85307346326837
- type: ap
value: 22.447519885878737
- type: f1
value: 61.0162730745633
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.04925053533191
- type: ap
value: 23.44983217128922
- type: f1
value: 62.5723230907759
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.28742500000001
- type: ap
value: 94.8449918887462
- type: f1
value: 96.28680923610432
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 56.716
- type: f1
value: 55.76510398266401
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 52.99999999999999
- type: f1
value: 52.00829994765178
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.806000000000004
- type: f1
value: 48.082345914983634
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.507999999999996
- type: f1
value: 47.68752844642045
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.709999999999994
- type: f1
value: 47.05870376637181
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 44.662000000000006
- type: f1
value: 43.42371965372771
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.721
- type: map_at_10
value: 49.221
- type: map_at_100
value: 49.884
- type: map_at_1000
value: 49.888
- type: map_at_3
value: 44.31
- type: map_at_5
value: 47.276
- type: mrr_at_1
value: 32.432
- type: mrr_at_10
value: 49.5
- type: mrr_at_100
value: 50.163000000000004
- type: mrr_at_1000
value: 50.166
- type: mrr_at_3
value: 44.618
- type: mrr_at_5
value: 47.541
- type: ndcg_at_1
value: 31.721
- type: ndcg_at_10
value: 58.384
- type: ndcg_at_100
value: 61.111000000000004
- type: ndcg_at_1000
value: 61.187999999999995
- type: ndcg_at_3
value: 48.386
- type: ndcg_at_5
value: 53.708999999999996
- type: precision_at_1
value: 31.721
- type: precision_at_10
value: 8.741
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.609
- type: recall_at_1
value: 31.721
- type: recall_at_10
value: 87.411
- type: recall_at_100
value: 99.075
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.044
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 46.40419580759799
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 40.48593255007969
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 63.889179122289995
- type: mrr
value: 77.61146286769556
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 88.15075203727929
- type: cos_sim_spearman
value: 86.9622224570873
- type: euclidean_pearson
value: 86.70473853624121
- type: euclidean_spearman
value: 86.9622224570873
- type: manhattan_pearson
value: 86.21089380980065
- type: manhattan_spearman
value: 86.75318154937008
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.65553235908142
- type: f1
value: 99.60681976339595
- type: precision
value: 99.58246346555325
- type: recall
value: 99.65553235908142
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26260180497468
- type: f1
value: 99.14520507740848
- type: precision
value: 99.08650671362535
- type: recall
value: 99.26260180497468
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.07412538967787
- type: f1
value: 97.86629719431936
- type: precision
value: 97.76238309664012
- type: recall
value: 98.07412538967787
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.42074776197998
- type: f1
value: 99.38564156573635
- type: precision
value: 99.36808846761454
- type: recall
value: 99.42074776197998
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 85.73376623376623
- type: f1
value: 85.68480707214599
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 40.935218072113855
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.276389017675264
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 27.764166666666668
- type: map_at_10
value: 37.298166666666674
- type: map_at_100
value: 38.530166666666666
- type: map_at_1000
value: 38.64416666666667
- type: map_at_3
value: 34.484833333333334
- type: map_at_5
value: 36.0385
- type: mrr_at_1
value: 32.93558333333333
- type: mrr_at_10
value: 41.589749999999995
- type: mrr_at_100
value: 42.425333333333334
- type: mrr_at_1000
value: 42.476333333333336
- type: mrr_at_3
value: 39.26825
- type: mrr_at_5
value: 40.567083333333336
- type: ndcg_at_1
value: 32.93558333333333
- type: ndcg_at_10
value: 42.706583333333334
- type: ndcg_at_100
value: 47.82483333333333
- type: ndcg_at_1000
value: 49.95733333333334
- type: ndcg_at_3
value: 38.064750000000004
- type: ndcg_at_5
value: 40.18158333333333
- type: precision_at_1
value: 32.93558333333333
- type: precision_at_10
value: 7.459833333333334
- type: precision_at_100
value: 1.1830833333333335
- type: precision_at_1000
value: 0.15608333333333332
- type: precision_at_3
value: 17.5235
- type: precision_at_5
value: 12.349833333333333
- type: recall_at_1
value: 27.764166666666668
- type: recall_at_10
value: 54.31775
- type: recall_at_100
value: 76.74350000000001
- type: recall_at_1000
value: 91.45208333333332
- type: recall_at_3
value: 41.23425
- type: recall_at_5
value: 46.73983333333334
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 12.969
- type: map_at_10
value: 21.584999999999997
- type: map_at_100
value: 23.3
- type: map_at_1000
value: 23.5
- type: map_at_3
value: 18.218999999999998
- type: map_at_5
value: 19.983
- type: mrr_at_1
value: 29.316
- type: mrr_at_10
value: 40.033
- type: mrr_at_100
value: 40.96
- type: mrr_at_1000
value: 41.001
- type: mrr_at_3
value: 37.123
- type: mrr_at_5
value: 38.757999999999996
- type: ndcg_at_1
value: 29.316
- type: ndcg_at_10
value: 29.858
- type: ndcg_at_100
value: 36.756
- type: ndcg_at_1000
value: 40.245999999999995
- type: ndcg_at_3
value: 24.822
- type: ndcg_at_5
value: 26.565
- type: precision_at_1
value: 29.316
- type: precision_at_10
value: 9.186
- type: precision_at_100
value: 1.6549999999999998
- type: precision_at_1000
value: 0.22999999999999998
- type: precision_at_3
value: 18.436
- type: precision_at_5
value: 13.876
- type: recall_at_1
value: 12.969
- type: recall_at_10
value: 35.142
- type: recall_at_100
value: 59.143
- type: recall_at_1000
value: 78.594
- type: recall_at_3
value: 22.604
- type: recall_at_5
value: 27.883000000000003
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.527999999999999
- type: map_at_10
value: 17.974999999999998
- type: map_at_100
value: 25.665
- type: map_at_1000
value: 27.406000000000002
- type: map_at_3
value: 13.017999999999999
- type: map_at_5
value: 15.137
- type: mrr_at_1
value: 62.5
- type: mrr_at_10
value: 71.891
- type: mrr_at_100
value: 72.294
- type: mrr_at_1000
value: 72.296
- type: mrr_at_3
value: 69.958
- type: mrr_at_5
value: 71.121
- type: ndcg_at_1
value: 50.875
- type: ndcg_at_10
value: 38.36
- type: ndcg_at_100
value: 44.235
- type: ndcg_at_1000
value: 52.154
- type: ndcg_at_3
value: 43.008
- type: ndcg_at_5
value: 40.083999999999996
- type: precision_at_1
value: 62.5
- type: precision_at_10
value: 30.0
- type: precision_at_100
value: 10.038
- type: precision_at_1000
value: 2.0869999999999997
- type: precision_at_3
value: 46.833000000000006
- type: precision_at_5
value: 38.800000000000004
- type: recall_at_1
value: 8.527999999999999
- type: recall_at_10
value: 23.828
- type: recall_at_100
value: 52.322
- type: recall_at_1000
value: 77.143
- type: recall_at_3
value: 14.136000000000001
- type: recall_at_5
value: 17.761
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.51
- type: f1
value: 47.632159862049896
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 60.734
- type: map_at_10
value: 72.442
- type: map_at_100
value: 72.735
- type: map_at_1000
value: 72.75
- type: map_at_3
value: 70.41199999999999
- type: map_at_5
value: 71.80499999999999
- type: mrr_at_1
value: 65.212
- type: mrr_at_10
value: 76.613
- type: mrr_at_100
value: 76.79899999999999
- type: mrr_at_1000
value: 76.801
- type: mrr_at_3
value: 74.8
- type: mrr_at_5
value: 76.12400000000001
- type: ndcg_at_1
value: 65.212
- type: ndcg_at_10
value: 77.988
- type: ndcg_at_100
value: 79.167
- type: ndcg_at_1000
value: 79.452
- type: ndcg_at_3
value: 74.362
- type: ndcg_at_5
value: 76.666
- type: precision_at_1
value: 65.212
- type: precision_at_10
value: 10.003
- type: precision_at_100
value: 1.077
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 29.518
- type: precision_at_5
value: 19.016
- type: recall_at_1
value: 60.734
- type: recall_at_10
value: 90.824
- type: recall_at_100
value: 95.71600000000001
- type: recall_at_1000
value: 97.577
- type: recall_at_3
value: 81.243
- type: recall_at_5
value: 86.90299999999999
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 23.845
- type: map_at_10
value: 39.281
- type: map_at_100
value: 41.422
- type: map_at_1000
value: 41.593
- type: map_at_3
value: 34.467
- type: map_at_5
value: 37.017
- type: mrr_at_1
value: 47.531
- type: mrr_at_10
value: 56.204
- type: mrr_at_100
value: 56.928999999999995
- type: mrr_at_1000
value: 56.962999999999994
- type: mrr_at_3
value: 54.115
- type: mrr_at_5
value: 55.373000000000005
- type: ndcg_at_1
value: 47.531
- type: ndcg_at_10
value: 47.711999999999996
- type: ndcg_at_100
value: 54.510999999999996
- type: ndcg_at_1000
value: 57.103
- type: ndcg_at_3
value: 44.145
- type: ndcg_at_5
value: 45.032
- type: precision_at_1
value: 47.531
- type: precision_at_10
value: 13.194
- type: precision_at_100
value: 2.045
- type: precision_at_1000
value: 0.249
- type: precision_at_3
value: 29.424
- type: precision_at_5
value: 21.451
- type: recall_at_1
value: 23.845
- type: recall_at_10
value: 54.967
- type: recall_at_100
value: 79.11399999999999
- type: recall_at_1000
value: 94.56700000000001
- type: recall_at_3
value: 40.256
- type: recall_at_5
value: 46.215
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 37.819
- type: map_at_10
value: 60.889
- type: map_at_100
value: 61.717999999999996
- type: map_at_1000
value: 61.778
- type: map_at_3
value: 57.254000000000005
- type: map_at_5
value: 59.541
- type: mrr_at_1
value: 75.638
- type: mrr_at_10
value: 82.173
- type: mrr_at_100
value: 82.362
- type: mrr_at_1000
value: 82.37
- type: mrr_at_3
value: 81.089
- type: mrr_at_5
value: 81.827
- type: ndcg_at_1
value: 75.638
- type: ndcg_at_10
value: 69.317
- type: ndcg_at_100
value: 72.221
- type: ndcg_at_1000
value: 73.382
- type: ndcg_at_3
value: 64.14
- type: ndcg_at_5
value: 67.07600000000001
- type: precision_at_1
value: 75.638
- type: precision_at_10
value: 14.704999999999998
- type: precision_at_100
value: 1.698
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 41.394999999999996
- type: precision_at_5
value: 27.162999999999997
- type: recall_at_1
value: 37.819
- type: recall_at_10
value: 73.52499999999999
- type: recall_at_100
value: 84.875
- type: recall_at_1000
value: 92.559
- type: recall_at_3
value: 62.092999999999996
- type: recall_at_5
value: 67.907
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 94.60079999999999
- type: ap
value: 92.67396345347356
- type: f1
value: 94.5988098167121
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 21.285
- type: map_at_10
value: 33.436
- type: map_at_100
value: 34.63
- type: map_at_1000
value: 34.681
- type: map_at_3
value: 29.412
- type: map_at_5
value: 31.715
- type: mrr_at_1
value: 21.848
- type: mrr_at_10
value: 33.979
- type: mrr_at_100
value: 35.118
- type: mrr_at_1000
value: 35.162
- type: mrr_at_3
value: 30.036
- type: mrr_at_5
value: 32.298
- type: ndcg_at_1
value: 21.862000000000002
- type: ndcg_at_10
value: 40.43
- type: ndcg_at_100
value: 46.17
- type: ndcg_at_1000
value: 47.412
- type: ndcg_at_3
value: 32.221
- type: ndcg_at_5
value: 36.332
- type: precision_at_1
value: 21.862000000000002
- type: precision_at_10
value: 6.491
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 13.744
- type: precision_at_5
value: 10.331999999999999
- type: recall_at_1
value: 21.285
- type: recall_at_10
value: 62.083
- type: recall_at_100
value: 88.576
- type: recall_at_1000
value: 98.006
- type: recall_at_3
value: 39.729
- type: recall_at_5
value: 49.608000000000004
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.92612859097127
- type: f1
value: 93.82370333372853
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.67681036911807
- type: f1
value: 92.14191382411472
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.26817878585723
- type: f1
value: 91.92824250337878
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.96554963983714
- type: f1
value: 90.02859329630792
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 90.02509860164935
- type: f1
value: 89.30665159182062
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.55515370705244
- type: f1
value: 87.94449232331907
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 82.4623803009576
- type: f1
value: 66.06738378772725
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 79.3716539870386
- type: f1
value: 60.37614033396853
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.34022681787857
- type: f1
value: 58.302008026952
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.72095208268087
- type: f1
value: 59.64524724009049
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.87020437432773
- type: f1
value: 57.80202694670567
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.73598553345387
- type: f1
value: 58.19628250675031
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.6630800268998
- type: f1
value: 65.00996668051691
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.7128446536651
- type: f1
value: 57.95860594874963
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.61129791526563
- type: f1
value: 59.75328290206483
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.00134498991257
- type: f1
value: 67.0230483991802
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.54068594485541
- type: f1
value: 65.54604628946976
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.032952252858095
- type: f1
value: 58.715741857057104
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.80901143241427
- type: f1
value: 68.33963989243877
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.47141896435777
- type: f1
value: 69.56765020308262
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.2373907195696
- type: f1
value: 69.04529836036467
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 77.05783456624076
- type: f1
value: 74.69430584708174
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.82111634162744
- type: f1
value: 70.77228952803762
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.25353059852051
- type: f1
value: 71.05310103416411
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.28648285137861
- type: f1
value: 69.08020473732226
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.31540013449899
- type: f1
value: 70.9426355465791
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.2151983860121
- type: f1
value: 67.52541755908858
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.58372562205784
- type: f1
value: 69.49769064229827
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.9233355749832
- type: f1
value: 69.36311548259593
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.07330195023538
- type: f1
value: 64.99882022345572
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.62273032952253
- type: f1
value: 70.6394885471001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.77000672494957
- type: f1
value: 62.9368944815065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.453261600538
- type: f1
value: 70.85069934666681
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6906523201076
- type: f1
value: 72.03249740074217
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.03631472763953
- type: f1
value: 59.3165215571852
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.913920645595155
- type: f1
value: 57.367337711611285
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.42837928715535
- type: f1
value: 52.60527294970906
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.33490248823135
- type: f1
value: 63.213340969404065
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.58507061197041
- type: f1
value: 68.40256628040486
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.11230665770006
- type: f1
value: 66.44863577842305
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.70073974445192
- type: f1
value: 67.21291337273702
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.43913920645595
- type: f1
value: 64.09838087422806
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.80026899798251
- type: f1
value: 68.76986742962444
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.78816408876934
- type: f1
value: 62.18781873428972
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.6577000672495
- type: f1
value: 68.75171511133003
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.42501681237391
- type: f1
value: 71.18434963451544
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.64828513786146
- type: f1
value: 70.67741914007422
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.62811028917284
- type: f1
value: 71.36402039740959
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.88634835238736
- type: f1
value: 69.23701923480677
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.15938130464022
- type: f1
value: 71.87792218993388
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.96301277740416
- type: f1
value: 67.29584200202983
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.49562878278412
- type: f1
value: 66.91716685679431
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 74.6805648957633
- type: f1
value: 72.02723592594374
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.00605245460659
- type: f1
value: 60.16716669482932
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.90988567585742
- type: f1
value: 63.99405488777784
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.62273032952253
- type: f1
value: 65.17213906909481
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.50907868190988
- type: f1
value: 69.15165697194853
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.30733019502352
- type: f1
value: 66.69024007380474
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.24277067921989
- type: f1
value: 68.80515408492947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.49831876260929
- type: f1
value: 64.83778567111116
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.28782784129119
- type: f1
value: 69.3294186700733
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.315400134499
- type: f1
value: 71.22674385243207
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.37794216543377
- type: f1
value: 68.96962492838232
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.33557498318764
- type: f1
value: 72.28949738478356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 65.84398117014123
- type: f1
value: 64.71026362091463
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.76462676529925
- type: f1
value: 69.8229667407667
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.02420981842636
- type: f1
value: 71.76576384895898
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.7572293207801
- type: f1
value: 72.76840765295256
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.02286482851379
- type: f1
value: 66.17237947327872
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.60928043039678
- type: f1
value: 77.27094731234773
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.68325487558843
- type: f1
value: 77.97530399082261
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.13315400134498
- type: f1
value: 75.97558584796424
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.47410894418292
- type: f1
value: 80.52244841473792
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.9670477471419
- type: f1
value: 77.37318805793146
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.09683927370544
- type: f1
value: 77.69773737430847
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.20847343644922
- type: f1
value: 75.17071738727348
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.07464694014796
- type: f1
value: 77.16136207698571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.53396099529255
- type: f1
value: 73.58296404484122
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.75319435104237
- type: f1
value: 75.24674707850833
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.0948217888366
- type: f1
value: 76.47559490205028
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.07599193006052
- type: f1
value: 70.76028043093511
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.10490921318089
- type: f1
value: 77.01215275283272
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.25756556825824
- type: f1
value: 70.20605314648762
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.08137188971082
- type: f1
value: 77.3899269057439
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.35440484196369
- type: f1
value: 79.58964690002772
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.42299932750504
- type: f1
value: 68.07844356925413
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.15669132481507
- type: f1
value: 65.89383352608513
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.11432414256894
- type: f1
value: 57.69910594559806
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.24747814391392
- type: f1
value: 70.42455553830918
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46267652992603
- type: f1
value: 76.8854559308316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.24815063887021
- type: f1
value: 72.77805034658074
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11566913248151
- type: f1
value: 73.86147988001356
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.0168123739072
- type: f1
value: 69.38515920054571
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.41156691324814
- type: f1
value: 73.43474953408237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.39609952925353
- type: f1
value: 67.29731681109291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.20914593140552
- type: f1
value: 77.07066497935367
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.52387357094821
- type: f1
value: 78.5259569473291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.6913248150639
- type: f1
value: 76.91201656350455
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.1217215870881
- type: f1
value: 77.41179937912504
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.25891055817083
- type: f1
value: 75.8089244542887
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.70679219905851
- type: f1
value: 78.21459594517711
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.83523873570948
- type: f1
value: 74.86847028401978
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.71755211835911
- type: f1
value: 74.0214326485662
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.06523201075991
- type: f1
value: 79.10545620325138
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.91862811028918
- type: f1
value: 66.50386121217983
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.93140551445865
- type: f1
value: 70.755435928495
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.40753194351042
- type: f1
value: 71.61816115782923
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.1815736381977
- type: f1
value: 75.08016717887205
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.86482851378614
- type: f1
value: 72.39521180006291
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.46940147948891
- type: f1
value: 76.70044085362349
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.89307330195024
- type: f1
value: 71.5721825332298
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.7511768661735
- type: f1
value: 75.17918654541515
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 78.69535978480162
- type: f1
value: 78.90019070153316
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.45729657027572
- type: f1
value: 76.19578371794672
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 36.92715354123554
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 35.53536244162518
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.08507884504006
- type: mrr
value: 34.32436977159129
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.935
- type: map_at_10
value: 13.297
- type: map_at_100
value: 16.907
- type: map_at_1000
value: 18.391
- type: map_at_3
value: 9.626999999999999
- type: map_at_5
value: 11.190999999999999
- type: mrr_at_1
value: 46.129999999999995
- type: mrr_at_10
value: 54.346000000000004
- type: mrr_at_100
value: 55.067
- type: mrr_at_1000
value: 55.1
- type: mrr_at_3
value: 51.961
- type: mrr_at_5
value: 53.246
- type: ndcg_at_1
value: 44.118
- type: ndcg_at_10
value: 35.534
- type: ndcg_at_100
value: 32.946999999999996
- type: ndcg_at_1000
value: 41.599000000000004
- type: ndcg_at_3
value: 40.25
- type: ndcg_at_5
value: 37.978
- type: precision_at_1
value: 46.129999999999995
- type: precision_at_10
value: 26.842
- type: precision_at_100
value: 8.427
- type: precision_at_1000
value: 2.128
- type: precision_at_3
value: 37.977
- type: precision_at_5
value: 32.879000000000005
- type: recall_at_1
value: 5.935
- type: recall_at_10
value: 17.211000000000002
- type: recall_at_100
value: 34.33
- type: recall_at_1000
value: 65.551
- type: recall_at_3
value: 10.483
- type: recall_at_5
value: 13.078999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 35.231
- type: map_at_10
value: 50.202000000000005
- type: map_at_100
value: 51.154999999999994
- type: map_at_1000
value: 51.181
- type: map_at_3
value: 45.774
- type: map_at_5
value: 48.522
- type: mrr_at_1
value: 39.687
- type: mrr_at_10
value: 52.88
- type: mrr_at_100
value: 53.569
- type: mrr_at_1000
value: 53.58500000000001
- type: mrr_at_3
value: 49.228
- type: mrr_at_5
value: 51.525
- type: ndcg_at_1
value: 39.687
- type: ndcg_at_10
value: 57.754000000000005
- type: ndcg_at_100
value: 61.597
- type: ndcg_at_1000
value: 62.18900000000001
- type: ndcg_at_3
value: 49.55
- type: ndcg_at_5
value: 54.11899999999999
- type: precision_at_1
value: 39.687
- type: precision_at_10
value: 9.313
- type: precision_at_100
value: 1.146
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 22.229
- type: precision_at_5
value: 15.939
- type: recall_at_1
value: 35.231
- type: recall_at_10
value: 78.083
- type: recall_at_100
value: 94.42099999999999
- type: recall_at_1000
value: 98.81
- type: recall_at_3
value: 57.047000000000004
- type: recall_at_5
value: 67.637
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.241
- type: map_at_10
value: 85.462
- type: map_at_100
value: 86.083
- type: map_at_1000
value: 86.09700000000001
- type: map_at_3
value: 82.49499999999999
- type: map_at_5
value: 84.392
- type: mrr_at_1
value: 82.09
- type: mrr_at_10
value: 88.301
- type: mrr_at_100
value: 88.383
- type: mrr_at_1000
value: 88.384
- type: mrr_at_3
value: 87.37
- type: mrr_at_5
value: 88.035
- type: ndcg_at_1
value: 82.12
- type: ndcg_at_10
value: 89.149
- type: ndcg_at_100
value: 90.235
- type: ndcg_at_1000
value: 90.307
- type: ndcg_at_3
value: 86.37599999999999
- type: ndcg_at_5
value: 87.964
- type: precision_at_1
value: 82.12
- type: precision_at_10
value: 13.56
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.88
- type: precision_at_5
value: 24.92
- type: recall_at_1
value: 71.241
- type: recall_at_10
value: 96.128
- type: recall_at_100
value: 99.696
- type: recall_at_1000
value: 99.994
- type: recall_at_3
value: 88.181
- type: recall_at_5
value: 92.694
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.59757799655151
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 64.27391998854624
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.243
- type: map_at_10
value: 10.965
- type: map_at_100
value: 12.934999999999999
- type: map_at_1000
value: 13.256
- type: map_at_3
value: 7.907
- type: map_at_5
value: 9.435
- type: mrr_at_1
value: 20.9
- type: mrr_at_10
value: 31.849
- type: mrr_at_100
value: 32.964
- type: mrr_at_1000
value: 33.024
- type: mrr_at_3
value: 28.517
- type: mrr_at_5
value: 30.381999999999998
- type: ndcg_at_1
value: 20.9
- type: ndcg_at_10
value: 18.723
- type: ndcg_at_100
value: 26.384999999999998
- type: ndcg_at_1000
value: 32.114
- type: ndcg_at_3
value: 17.753
- type: ndcg_at_5
value: 15.558
- type: precision_at_1
value: 20.9
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 2.078
- type: precision_at_1000
value: 0.345
- type: precision_at_3
value: 16.900000000000002
- type: precision_at_5
value: 13.88
- type: recall_at_1
value: 4.243
- type: recall_at_10
value: 19.885
- type: recall_at_100
value: 42.17
- type: recall_at_1000
value: 70.12
- type: recall_at_3
value: 10.288
- type: recall_at_5
value: 14.072000000000001
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.84209174935282
- type: cos_sim_spearman
value: 81.73248048438833
- type: euclidean_pearson
value: 83.02810070308149
- type: euclidean_spearman
value: 81.73248295679514
- type: manhattan_pearson
value: 82.95368060376002
- type: manhattan_spearman
value: 81.60277910998718
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 88.52628804556943
- type: cos_sim_spearman
value: 82.5713913555672
- type: euclidean_pearson
value: 85.8796774746988
- type: euclidean_spearman
value: 82.57137506803424
- type: manhattan_pearson
value: 85.79671002960058
- type: manhattan_spearman
value: 82.49445981618027
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 86.23682503505542
- type: cos_sim_spearman
value: 87.15008956711806
- type: euclidean_pearson
value: 86.79805401524959
- type: euclidean_spearman
value: 87.15008956711806
- type: manhattan_pearson
value: 86.65298502699244
- type: manhattan_spearman
value: 86.97677821948562
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.63370304677802
- type: cos_sim_spearman
value: 84.97105553540318
- type: euclidean_pearson
value: 85.28896108687721
- type: euclidean_spearman
value: 84.97105553540318
- type: manhattan_pearson
value: 85.09663190337331
- type: manhattan_spearman
value: 84.79126831644619
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 90.2614838800733
- type: cos_sim_spearman
value: 91.0509162991835
- type: euclidean_pearson
value: 90.33098317533373
- type: euclidean_spearman
value: 91.05091625871644
- type: manhattan_pearson
value: 90.26250435151107
- type: manhattan_spearman
value: 90.97999594417519
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.80480973335091
- type: cos_sim_spearman
value: 87.313695492969
- type: euclidean_pearson
value: 86.49267251576939
- type: euclidean_spearman
value: 87.313695492969
- type: manhattan_pearson
value: 86.44019901831935
- type: manhattan_spearman
value: 87.24205395460392
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 90.05662789380672
- type: cos_sim_spearman
value: 90.02759424426651
- type: euclidean_pearson
value: 90.4042483422981
- type: euclidean_spearman
value: 90.02759424426651
- type: manhattan_pearson
value: 90.51446975000226
- type: manhattan_spearman
value: 90.08832889933616
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 67.5975528273532
- type: cos_sim_spearman
value: 67.62969861411354
- type: euclidean_pearson
value: 69.224275734323
- type: euclidean_spearman
value: 67.62969861411354
- type: manhattan_pearson
value: 69.3761447059927
- type: manhattan_spearman
value: 67.90921005611467
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.11244327231684
- type: cos_sim_spearman
value: 88.37902438979035
- type: euclidean_pearson
value: 87.86054279847336
- type: euclidean_spearman
value: 88.37902438979035
- type: manhattan_pearson
value: 87.77257757320378
- type: manhattan_spearman
value: 88.25208966098123
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 85.87174608143563
- type: mrr
value: 96.12836872640794
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.760999999999996
- type: map_at_10
value: 67.258
- type: map_at_100
value: 67.757
- type: map_at_1000
value: 67.78800000000001
- type: map_at_3
value: 64.602
- type: map_at_5
value: 65.64
- type: mrr_at_1
value: 60.667
- type: mrr_at_10
value: 68.441
- type: mrr_at_100
value: 68.825
- type: mrr_at_1000
value: 68.853
- type: mrr_at_3
value: 66.444
- type: mrr_at_5
value: 67.26100000000001
- type: ndcg_at_1
value: 60.667
- type: ndcg_at_10
value: 71.852
- type: ndcg_at_100
value: 73.9
- type: ndcg_at_1000
value: 74.628
- type: ndcg_at_3
value: 67.093
- type: ndcg_at_5
value: 68.58
- type: precision_at_1
value: 60.667
- type: precision_at_10
value: 9.6
- type: precision_at_100
value: 1.0670000000000002
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.111
- type: precision_at_5
value: 16.733
- type: recall_at_1
value: 57.760999999999996
- type: recall_at_10
value: 84.967
- type: recall_at_100
value: 93.833
- type: recall_at_1000
value: 99.333
- type: recall_at_3
value: 71.589
- type: recall_at_5
value: 75.483
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.66633663366336
- type: cos_sim_ap
value: 91.17685358899108
- type: cos_sim_f1
value: 82.16818642350559
- type: cos_sim_precision
value: 83.26488706365504
- type: cos_sim_recall
value: 81.10000000000001
- type: dot_accuracy
value: 99.66633663366336
- type: dot_ap
value: 91.17663411119032
- type: dot_f1
value: 82.16818642350559
- type: dot_precision
value: 83.26488706365504
- type: dot_recall
value: 81.10000000000001
- type: euclidean_accuracy
value: 99.66633663366336
- type: euclidean_ap
value: 91.17685189882275
- type: euclidean_f1
value: 82.16818642350559
- type: euclidean_precision
value: 83.26488706365504
- type: euclidean_recall
value: 81.10000000000001
- type: manhattan_accuracy
value: 99.66633663366336
- type: manhattan_ap
value: 91.2241619496737
- type: manhattan_f1
value: 82.20472440944883
- type: manhattan_precision
value: 86.51933701657458
- type: manhattan_recall
value: 78.3
- type: max_accuracy
value: 99.66633663366336
- type: max_ap
value: 91.2241619496737
- type: max_f1
value: 82.20472440944883
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 66.85101268897951
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 42.461184054706905
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 51.44542568873886
- type: mrr
value: 52.33656151854681
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.75982974997539
- type: cos_sim_spearman
value: 30.385405026539914
- type: dot_pearson
value: 30.75982433546523
- type: dot_spearman
value: 30.385405026539914
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22799999999999998
- type: map_at_10
value: 2.064
- type: map_at_100
value: 13.056000000000001
- type: map_at_1000
value: 31.747999999999998
- type: map_at_3
value: 0.67
- type: map_at_5
value: 1.097
- type: mrr_at_1
value: 90.0
- type: mrr_at_10
value: 94.667
- type: mrr_at_100
value: 94.667
- type: mrr_at_1000
value: 94.667
- type: mrr_at_3
value: 94.667
- type: mrr_at_5
value: 94.667
- type: ndcg_at_1
value: 86.0
- type: ndcg_at_10
value: 82.0
- type: ndcg_at_100
value: 64.307
- type: ndcg_at_1000
value: 57.023999999999994
- type: ndcg_at_3
value: 85.816
- type: ndcg_at_5
value: 84.904
- type: precision_at_1
value: 90.0
- type: precision_at_10
value: 85.8
- type: precision_at_100
value: 66.46
- type: precision_at_1000
value: 25.202
- type: precision_at_3
value: 90.0
- type: precision_at_5
value: 89.2
- type: recall_at_1
value: 0.22799999999999998
- type: recall_at_10
value: 2.235
- type: recall_at_100
value: 16.185
- type: recall_at_1000
value: 53.620999999999995
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.172
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.75
- type: precision
value: 96.45
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.54913294797689
- type: f1
value: 82.46628131021194
- type: precision
value: 81.1175337186898
- type: recall
value: 85.54913294797689
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.21951219512195
- type: f1
value: 77.33333333333334
- type: precision
value: 75.54878048780488
- type: recall
value: 81.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.26666666666665
- type: precision
value: 98.1
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.5
- type: f1
value: 99.33333333333333
- type: precision
value: 99.25
- type: recall
value: 99.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.2
- type: precision
value: 96.89999999999999
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.18333333333334
- type: precision
value: 96.88333333333333
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.61194029850746
- type: f1
value: 72.81094527363183
- type: precision
value: 70.83333333333333
- type: recall
value: 77.61194029850746
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.91666666666667
- type: precision
value: 91.08333333333334
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.29268292682927
- type: f1
value: 85.27642276422765
- type: precision
value: 84.01277584204414
- type: recall
value: 88.29268292682927
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.0
- type: precision
value: 94.46666666666668
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.681652490887
- type: f1
value: 91.90765492102065
- type: precision
value: 91.05913325232888
- type: recall
value: 93.681652490887
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.17391304347827
- type: f1
value: 89.97101449275361
- type: precision
value: 88.96811594202899
- type: recall
value: 92.17391304347827
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.43478260869566
- type: f1
value: 87.72173913043478
- type: precision
value: 86.42028985507245
- type: recall
value: 90.43478260869566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.03
- type: precision
value: 86.95
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.4
- type: f1
value: 91.45666666666666
- type: precision
value: 90.525
- type: recall
value: 93.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.9059107358263
- type: f1
value: 78.32557872364869
- type: precision
value: 76.78260286824823
- type: recall
value: 81.9059107358263
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.58333333333333
- type: precision
value: 91.73333333333332
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.10000000000001
- type: f1
value: 74.50500000000001
- type: precision
value: 72.58928571428571
- type: recall
value: 79.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.55
- type: precision
value: 95.05
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.0952380952381
- type: f1
value: 77.98458049886621
- type: precision
value: 76.1968253968254
- type: recall
value: 82.0952380952381
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.9
- type: f1
value: 84.99190476190476
- type: precision
value: 83.65
- type: recall
value: 87.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.56666666666666
- type: precision
value: 94.01666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.6
- type: f1
value: 98.2
- type: precision
value: 98.0
- type: recall
value: 98.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.38333333333334
- type: precision
value: 93.78333333333335
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.4
- type: f1
value: 84.10380952380952
- type: precision
value: 82.67
- type: recall
value: 87.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.5
- type: f1
value: 94.33333333333334
- type: precision
value: 93.78333333333333
- type: recall
value: 95.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.4
- type: f1
value: 86.82000000000001
- type: precision
value: 85.64500000000001
- type: recall
value: 89.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.1
- type: f1
value: 93.56666666666668
- type: precision
value: 92.81666666666666
- type: recall
value: 95.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.9
- type: f1
value: 98.6
- type: precision
value: 98.45
- type: recall
value: 98.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.01347708894879
- type: f1
value: 93.51752021563343
- type: precision
value: 92.82794249775381
- type: recall
value: 95.01347708894879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.00854700854701
- type: f1
value: 96.08262108262107
- type: precision
value: 95.65527065527067
- type: recall
value: 97.00854700854701
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5
- type: f1
value: 95.39999999999999
- type: precision
value: 94.88333333333333
- type: recall
value: 96.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.5909090909091
- type: f1
value: 95.49242424242425
- type: precision
value: 94.9621212121212
- type: recall
value: 96.5909090909091
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.90566037735849
- type: f1
value: 81.85883997204752
- type: precision
value: 80.54507337526205
- type: recall
value: 84.90566037735849
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.5
- type: f1
value: 96.75
- type: precision
value: 96.38333333333333
- type: recall
value: 97.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7704280155642
- type: f1
value: 82.99610894941635
- type: precision
value: 81.32295719844358
- type: recall
value: 86.7704280155642
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.52136752136752
- type: f1
value: 61.89662189662191
- type: precision
value: 59.68660968660969
- type: recall
value: 67.52136752136752
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.2
- type: f1
value: 86.32
- type: precision
value: 85.015
- type: recall
value: 89.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.0
- type: f1
value: 94.78333333333333
- type: precision
value: 94.18333333333334
- type: recall
value: 96.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.8785046728972
- type: f1
value: 80.54517133956385
- type: precision
value: 79.154984423676
- type: recall
value: 83.8785046728972
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.60000000000001
- type: f1
value: 92.01333333333334
- type: precision
value: 91.28333333333333
- type: recall
value: 93.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.1
- type: f1
value: 96.26666666666667
- type: precision
value: 95.85000000000001
- type: recall
value: 97.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.3
- type: f1
value: 80.67833333333333
- type: precision
value: 79.03928571428571
- type: recall
value: 84.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.3
- type: f1
value: 96.48333333333332
- type: precision
value: 96.08333333333331
- type: recall
value: 97.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.66666666666667
- type: precision
value: 94.16666666666667
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.36666666666667
- type: precision
value: 95.96666666666668
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.80666666666667
- type: precision
value: 92.12833333333333
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.22333333333334
- type: precision
value: 95.875
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.33333333333333
- type: f1
value: 70.78174603174602
- type: precision
value: 69.28333333333332
- type: recall
value: 74.33333333333333
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.6
- type: f1
value: 32.938348952090365
- type: precision
value: 31.2811038961039
- type: recall
value: 37.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.5
- type: f1
value: 89.13333333333333
- type: precision
value: 88.03333333333333
- type: recall
value: 91.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.14285714285714
- type: f1
value: 77.67857142857143
- type: precision
value: 75.59523809523809
- type: recall
value: 82.14285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.0450054884742
- type: f1
value: 63.070409283362075
- type: precision
value: 60.58992781824835
- type: recall
value: 69.0450054884742
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.1
- type: f1
value: 57.848333333333336
- type: precision
value: 55.69500000000001
- type: recall
value: 63.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.01666666666667
- type: precision
value: 94.5
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.90666666666667
- type: precision
value: 94.425
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.61333333333333
- type: precision
value: 83.27
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.4
- type: f1
value: 71.90746031746032
- type: precision
value: 70.07027777777778
- type: recall
value: 76.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.26666666666667
- type: precision
value: 96.95
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 78.8
- type: f1
value: 74.39555555555555
- type: precision
value: 72.59416666666667
- type: recall
value: 78.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 93.78999999999999
- type: precision
value: 93.125
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.75
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.25666666666666
- type: precision
value: 93.64166666666668
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 56.934306569343065
- type: f1
value: 51.461591936044485
- type: precision
value: 49.37434827945776
- type: recall
value: 56.934306569343065
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 20.200000000000003
- type: f1
value: 16.91799284049284
- type: precision
value: 15.791855158730158
- type: recall
value: 20.200000000000003
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.2
- type: f1
value: 95.3
- type: precision
value: 94.85
- type: recall
value: 96.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.3
- type: f1
value: 95.11666666666667
- type: precision
value: 94.53333333333333
- type: recall
value: 96.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.88095238095238
- type: f1
value: 87.14285714285714
- type: precision
value: 85.96230158730161
- type: recall
value: 89.88095238095238
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 24.099999999999998
- type: f1
value: 19.630969083349783
- type: precision
value: 18.275094905094907
- type: recall
value: 24.099999999999998
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.4368530020704
- type: f1
value: 79.45183870649709
- type: precision
value: 77.7432712215321
- type: recall
value: 83.4368530020704
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.53333333333333
- type: precision
value: 93.91666666666666
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.8
- type: f1
value: 98.48333333333332
- type: precision
value: 98.33333333333334
- type: recall
value: 98.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.5
- type: f1
value: 14.979285714285714
- type: precision
value: 14.23235060690943
- type: recall
value: 17.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.93939393939394
- type: f1
value: 91.991341991342
- type: precision
value: 91.05339105339105
- type: recall
value: 93.93939393939394
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.31297709923665
- type: f1
value: 86.76844783715012
- type: precision
value: 85.63613231552164
- type: recall
value: 89.31297709923665
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.12663755458514
- type: f1
value: 98.93255701115964
- type: precision
value: 98.83551673944687
- type: recall
value: 99.12663755458514
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.0
- type: f1
value: 89.77999999999999
- type: precision
value: 88.78333333333333
- type: recall
value: 92.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.89265536723164
- type: f1
value: 95.85687382297553
- type: precision
value: 95.33898305084746
- type: recall
value: 96.89265536723164
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 14.6
- type: f1
value: 11.820611790170615
- type: precision
value: 11.022616224355355
- type: recall
value: 14.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.93333333333334
- type: precision
value: 94.48666666666666
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.6
- type: f1
value: 84.72333333333334
- type: precision
value: 83.44166666666666
- type: recall
value: 87.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.47333333333333
- type: precision
value: 92.875
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.6
- type: f1
value: 95.71666666666665
- type: precision
value: 95.28333333333335
- type: recall
value: 96.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 17.8
- type: f1
value: 14.511074040901628
- type: precision
value: 13.503791000666002
- type: recall
value: 17.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.10187667560321
- type: f1
value: 92.46648793565683
- type: precision
value: 91.71134941912423
- type: recall
value: 94.10187667560321
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.0
- type: f1
value: 96.11666666666666
- type: precision
value: 95.68333333333334
- type: recall
value: 97.0
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 72.72727272727273
- type: f1
value: 66.58949745906267
- type: precision
value: 63.86693017127799
- type: recall
value: 72.72727272727273
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.14084507042254
- type: f1
value: 88.26291079812206
- type: precision
value: 87.32394366197182
- type: recall
value: 90.14084507042254
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.67065868263472
- type: f1
value: 58.2876627696987
- type: precision
value: 55.79255774165953
- type: recall
value: 64.67065868263472
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.6
- type: f1
value: 94.41666666666667
- type: precision
value: 93.85
- type: recall
value: 95.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 55.172413793103445
- type: f1
value: 49.63992493549144
- type: precision
value: 47.71405113769646
- type: recall
value: 55.172413793103445
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.46478873239437
- type: f1
value: 73.4417616811983
- type: precision
value: 71.91607981220658
- type: recall
value: 77.46478873239437
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 84.61538461538461
- type: f1
value: 80.91452991452994
- type: precision
value: 79.33760683760683
- type: recall
value: 84.61538461538461
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2
- type: f1
value: 97.6
- type: precision
value: 97.3
- type: recall
value: 98.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 75.5741127348643
- type: f1
value: 72.00417536534445
- type: precision
value: 70.53467872883321
- type: recall
value: 75.5741127348643
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 62.2
- type: f1
value: 55.577460317460314
- type: precision
value: 52.98583333333333
- type: recall
value: 62.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.18241042345277
- type: f1
value: 90.6468124709167
- type: precision
value: 89.95656894679696
- type: recall
value: 92.18241042345277
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.13333333333333
- type: precision
value: 94.66666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 95.85000000000001
- type: precision
value: 95.39999999999999
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.1259842519685
- type: f1
value: 89.76377952755905
- type: precision
value: 88.71391076115485
- type: recall
value: 92.1259842519685
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.49
- type: precision
value: 91.725
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.5623268698061
- type: f1
value: 73.27364463791058
- type: precision
value: 71.51947852086357
- type: recall
value: 77.5623268698061
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.56666666666666
- type: precision
value: 96.16666666666667
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 66.34615384615384
- type: f1
value: 61.092032967032964
- type: precision
value: 59.27197802197802
- type: recall
value: 66.34615384615384
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.41190476190476
- type: precision
value: 92.7
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.10000000000001
- type: f1
value: 91.10000000000001
- type: precision
value: 90.13333333333333
- type: recall
value: 93.10000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.97333333333334
- type: precision
value: 91.14166666666667
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.21698113207547
- type: f1
value: 90.3796046720575
- type: precision
value: 89.56367924528303
- type: recall
value: 92.21698113207547
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.6
- type: f1
value: 96.91666666666667
- type: precision
value: 96.6
- type: recall
value: 97.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.44525547445255
- type: f1
value: 96.71532846715328
- type: precision
value: 96.35036496350365
- type: recall
value: 97.44525547445255
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.34000000000002
- type: precision
value: 91.49166666666667
- type: recall
value: 94.1
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 3.2910000000000004
- type: map_at_10
value: 10.373000000000001
- type: map_at_100
value: 15.612
- type: map_at_1000
value: 17.06
- type: map_at_3
value: 6.119
- type: map_at_5
value: 7.917000000000001
- type: mrr_at_1
value: 44.897999999999996
- type: mrr_at_10
value: 56.054
- type: mrr_at_100
value: 56.82000000000001
- type: mrr_at_1000
value: 56.82000000000001
- type: mrr_at_3
value: 52.381
- type: mrr_at_5
value: 53.81
- type: ndcg_at_1
value: 42.857
- type: ndcg_at_10
value: 27.249000000000002
- type: ndcg_at_100
value: 36.529
- type: ndcg_at_1000
value: 48.136
- type: ndcg_at_3
value: 33.938
- type: ndcg_at_5
value: 29.951
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 22.653000000000002
- type: precision_at_100
value: 7.000000000000001
- type: precision_at_1000
value: 1.48
- type: precision_at_3
value: 32.653
- type: precision_at_5
value: 27.755000000000003
- type: recall_at_1
value: 3.2910000000000004
- type: recall_at_10
value: 16.16
- type: recall_at_100
value: 43.908
- type: recall_at_1000
value: 79.823
- type: recall_at_3
value: 7.156
- type: recall_at_5
value: 10.204
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.05879999999999
- type: ap
value: 14.609748142799111
- type: f1
value: 54.878956295843096
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 64.61799660441426
- type: f1
value: 64.8698191961434
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.32860036611885
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 88.34714192048638
- type: cos_sim_ap
value: 80.26732975975634
- type: cos_sim_f1
value: 73.53415148134374
- type: cos_sim_precision
value: 69.34767360299276
- type: cos_sim_recall
value: 78.25857519788919
- type: dot_accuracy
value: 88.34714192048638
- type: dot_ap
value: 80.26733698491206
- type: dot_f1
value: 73.53415148134374
- type: dot_precision
value: 69.34767360299276
- type: dot_recall
value: 78.25857519788919
- type: euclidean_accuracy
value: 88.34714192048638
- type: euclidean_ap
value: 80.26734337771738
- type: euclidean_f1
value: 73.53415148134374
- type: euclidean_precision
value: 69.34767360299276
- type: euclidean_recall
value: 78.25857519788919
- type: manhattan_accuracy
value: 88.30541813196639
- type: manhattan_ap
value: 80.19415808104145
- type: manhattan_f1
value: 73.55143870713441
- type: manhattan_precision
value: 73.25307511122743
- type: manhattan_recall
value: 73.85224274406332
- type: max_accuracy
value: 88.34714192048638
- type: max_ap
value: 80.26734337771738
- type: max_f1
value: 73.55143870713441
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.81061047075717
- type: cos_sim_ap
value: 87.11747055081017
- type: cos_sim_f1
value: 80.04355498817256
- type: cos_sim_precision
value: 78.1165262000733
- type: cos_sim_recall
value: 82.06806282722513
- type: dot_accuracy
value: 89.81061047075717
- type: dot_ap
value: 87.11746902745236
- type: dot_f1
value: 80.04355498817256
- type: dot_precision
value: 78.1165262000733
- type: dot_recall
value: 82.06806282722513
- type: euclidean_accuracy
value: 89.81061047075717
- type: euclidean_ap
value: 87.11746919324248
- type: euclidean_f1
value: 80.04355498817256
- type: euclidean_precision
value: 78.1165262000733
- type: euclidean_recall
value: 82.06806282722513
- type: manhattan_accuracy
value: 89.79508673885202
- type: manhattan_ap
value: 87.11074390832218
- type: manhattan_f1
value: 80.13002540726349
- type: manhattan_precision
value: 77.83826945412311
- type: manhattan_recall
value: 82.56082537727133
- type: max_accuracy
value: 89.81061047075717
- type: max_ap
value: 87.11747055081017
- type: max_f1
value: 80.13002540726349
---
# yoeven/multilingual-e5-large-instruct-Q5_0-GGUF
This model was converted to GGUF format from [`intfloat/multilingual-e5-large-instruct`](https://huggingface.co/intfloat/multilingual-e5-large-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf -c 2048
```
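Since this checkpoint is an embedding model, you will usually want vectors rather than text completions. Below is a minimal sketch of querying the server's OpenAI-compatible embeddings endpoint; it assumes the server is started with the `--embedding` flag and listens on the default port 8080 (flag and endpoint names may vary across llama.cpp versions):
```bash
# Start the server with embeddings enabled (assumed flag; check your llama.cpp version)
llama-server --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf --embedding -c 2048

# Request an embedding; E5-instruct models expect an instruction-prefixed query
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "Instruct: Given a web search query, retrieve relevant passages\nQuery: how much protein should a female eat"}'
```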
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo yoeven/multilingual-e5-large-instruct-Q5_0-GGUF --hf-file multilingual-e5-large-instruct-q5_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
NouRed/medqsum-bart-large-xsum-meqsum | NouRed | summarization | [
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"medical question answering",
"medical question understanding",
"consumer health question",
"prompt engineering",
"LLM",
"en",
"dataset:bigbio/meqsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-08-19T18:42:56 | 2024-01-08T16:17:24 | 41 | 1 | ---
datasets:
- bigbio/meqsum
language: en
library_name: transformers
license: apache-2.0
tags:
- summarization
- bart
- medical question answering
- medical question understanding
- consumer health question
- prompt engineering
- LLM
widget:
- text: ' SUBJECT: high inner eye pressure above 21 possible glaucoma MESSAGE: have
seen inner eye pressure increase as I have begin taking Rizatriptan. I understand
the med narrows blood vessels. Can this med. cause or effect the closed or wide
angle issues with the eyelense/glacoma.'
model-index:
- name: medqsum-bart-large-xsum-meqsum
results:
- task:
type: summarization
name: Summarization
dataset:
name: Dataset for medical question summarization
type: bigbio/meqsum
split: valid
metrics:
    - type: rouge-1
      value: 54.32
      name: Validation ROUGE-1
    - type: rouge-2
      value: 38.08
      name: Validation ROUGE-2
    - type: rouge-l
      value: 51.98
      name: Validation ROUGE-L
    - type: rouge-l-sum
      value: 51.99
      name: Validation ROUGE-L-SUM
---
[](https://github.com/zekaouinoureddine/MedQSum)
## MedQSum
<a href="https://github.com/zekaouinoureddine/MedQSum">
<img src="https://raw.githubusercontent.com/zekaouinoureddine/MedQSum/master/assets/models.png" alt="drawing" width="600"/>
</a>
## TL;DR
**medqsum-bart-large-xsum-meqsum** is the best fine-tuned model in the paper [Enhancing Large Language Models' Utility for Medical Question-Answering: A Patient Health Question Summarization Approach](https://doi.org/10.1109/SITA60746.2023.10373720), which introduces a solution for getting the most out of LLMs when answering health-related questions. We address the challenge of crafting accurate prompts by summarizing consumer health questions (CHQs) to generate clear and concise medical questions. Our approach involves fine-tuning Transformer-based models, including Flan-T5, in resource-constrained environments on three medical question summarization datasets.
## Hyperparameters
```json
{
"dataset_name": "MeQSum",
"learning_rate": 3e-05,
"model_name_or_path": "facebook/bart-large-xsum",
"num_train_epochs": 4,
"per_device_eval_batch_size": 4,
"per_device_train_batch_size": 4,
"predict_with_generate": true,
}
```
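These fields mirror the command-line arguments of Hugging Face's `run_summarization.py` example script. A sketch of an equivalent launch is shown below; the script invocation and output directory are assumptions, not details published in this card:
```bash
python run_summarization.py \
  --model_name_or_path facebook/bart-large-xsum \
  --dataset_name bigbio/meqsum \
  --learning_rate 3e-5 \
  --num_train_epochs 4 \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 4 \
  --predict_with_generate \
  --do_train --do_eval \
  --output_dir ./medqsum-bart-large-xsum-meqsum
```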
## Usage
```python
from transformers import pipeline
# Load the fine-tuned question summarization pipeline
summarizer = pipeline("summarization", model="NouRed/medqsum-bart-large-xsum-meqsum")
chq = '''SUBJECT: high inner eye pressure above 21 possible glaucoma
MESSAGE: have seen inner eye pressure increase as I have begin taking
Rizatriptan. I understand the med narrows blood vessels. Can this med.
cause or effect the closed or wide angle issues with the eyelense/glacoma.
'''
summarizer(chq)  # returns a list like [{'summary_text': '...'}]
```
## Results
| key | value |
| --- | ----- |
| eval_rouge1 | 54.32 |
| eval_rouge2 | 38.08 |
| eval_rougeL | 51.98 |
| eval_rougeLsum | 51.99 |
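Scores like these can be computed with the Hugging Face `evaluate` library. A minimal sketch follows; the question and reference summary are placeholders, and in practice you would iterate over the MeQSum validation split:
```python
import evaluate
from transformers import pipeline

summarizer = pipeline("summarization", model="NouRed/medqsum-bart-large-xsum-meqsum")
rouge = evaluate.load("rouge")

# Placeholder example pair; replace with the MeQSum validation split
chqs = ["SUBJECT: high inner eye pressure above 21 possible glaucoma MESSAGE: ..."]
references = ["Does Rizatriptan affect glaucoma?"]  # placeholder gold summary

predictions = [out["summary_text"] for out in summarizer(chqs)]
print(rouge.compute(predictions=predictions, references=references))
```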
## Cite This
```
@INPROCEEDINGS{10373720,
author={Zekaoui, Nour Eddine and Yousfi, Siham and Mikram, Mounia and Rhanoui, Maryem},
booktitle={2023 14th International Conference on Intelligent Systems: Theories and Applications (SITA)},
title={Enhancing Large Language Models’ Utility for Medical Question-Answering: A Patient Health Question Summarization Approach},
year={2023},
volume={},
number={},
pages={1-8},
doi={10.1109/SITA60746.2023.10373720}}
``` | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"MEQSUM"
] |
chillymiao/Hyacinth6B | chillymiao | text-generation | [
"transformers",
"pytorch",
"chatglm",
"text-generation",
"custom_code",
"zh",
"arxiv:2403.13334",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-12-01T08:01:38 | 2024-04-12T07:00:18 | 41 | 1 | ---
language:
- zh
license: apache-2.0
pipeline_tag: text-generation
---
# Hyacinth6B: A Traditional Chinese Large Language Model
<img src="./pics/hyacinth.jpeg" alt="image_name png"/>
Hyacinth6B is a Traditional Chinese large language model fine-tuned from [chatglm3-base](https://huggingface.co/THUDM/chatglm3-6b-base). Our goal is to find a balance between model lightness and performance, striving to maximize performance while using a comparatively lightweight model. Hyacinth6B was developed with this objective in mind, aiming to fully leverage the core capabilities of LLMs without incurring substantial resource costs, effectively pushing the boundaries of smaller models' performance. The training approach involves parameter-efficient fine-tuning using the Low-Rank Adaptation (LoRA) method.
Finally, we evaluated Hyacinth6B, examining its performance across various aspects. Hyacinth6B shows commendable performance on certain metrics, even surpassing ChatGPT in two categories. We look forward to providing more resources and possibilities for the field of Traditional Chinese language processing. This research aims to expand the research scope of Traditional Chinese language models and enhance their applicability in different scenarios.
# Training Config
Training required approximately 20.6 GB of VRAM without any quantization (default fp16) and took a total of 369 hours on a single RTX 4090.
| Hyperparameter | Value |
| -------------- | ----- |
| Batch Size     | 8     |
| Learning Rate  | 5e-5  |
| Epochs         | 3     |
| LoRA r         | 16    |
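The exact training script is not published here; the sketch below shows a PEFT LoRA configuration consistent with the table above. The `lora_alpha`, `lora_dropout`, and `target_modules` values are assumptions (`query_key_value` is a common choice for ChatGLM-style attention blocks), not settings taken from this card:
```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "THUDM/chatglm3-6b-base", trust_remote_code=True
)

lora_config = LoraConfig(
    r=16,                                # LoRA rank from the table above
    lora_alpha=32,                       # assumption, not published in this card
    lora_dropout=0.05,                   # assumption, not published in this card
    target_modules=["query_key_value"],  # assumed; common for ChatGLM-style attention
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```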
# Evaluate Results
## CMMLU
<img src="./pics/cmmlu.png" alt="image_name png"/>
## C-eval
<img src="./pics/ceval.png" alt="image_name png"/>
## TC-eval by MediaTek Research
<img src="./pics/tc-eval.png" alt="image_name png"/>
## MT-bench
<img src="./pics/dashB.png" alt="image_name png"/>
## LLM-eval by NTU Miu Lab
<img src="./pics/llmeval.png" alt="image_name png"/>
## Bailong Bench
| Bailong-bench| Taiwan-LLM-7B-v2.1-chat |Taiwan-LLM-13B-v2.0-chat |gpt-3.5-turbo-1103|Bailong-instruct 7B|Hyacinth6B(ours)|
| -------- | -------- | --- | --- | --- | -------- |
|Arithmetic|9.0|10.0|10.0|9.2|8.4|
|Copywriting generation|7.6|3.0|9.0|9.6|10.0 |
|Creative writing|6.1|7.5 |8.7 |9.4 |8.3 |
|English instruction| 6.0| 1.9 |10.0 |9.2 | 10.0 |
|General|7.7| 8.1 |9.9 |9.2 | 9.2 |
|Health consultation|7.7| 8.5 |9.9 |9.2 | 9.8 |
|Knowledge-based question|4.2| 8.4 | 9.9 | 9.8 |4.9 |
|Mail assistant|9.5| 9.9 |9.0 |9.9 | 9.5 |
|Morality and Ethics| 4.5 | 9.3 |9.8 |9.7 |7.4 |
|Multi-turn|7.9|8.7 |9.0 |7.8 |4.4 |
|Open question|7.0|9.2 |7.6 |9.6 | 8.2 |
|Proofreading|3.0|4.0 |10.0 |9.0 | 9.1 |
|Summarization|6.2| 7.4 |9.9 |9.8 | 8.4 |
|Translation|7.0|9.0 |8.1 |9.5 | 10.0 |
|**Average**|6.7| 7.9 |9.4 |9.4 | 8.4 |
## Acknowledgement
Thanks to Taiwan LLM's author, Yen-Ting Lin, for his kind advice.
Please check out his marvellous work!
[Yen-Ting Lin's hugging face](https://huggingface.co/yentinglin)
## Disclaimer
This model is intended for research purposes only. The author does not guarantee its accuracy, completeness, or suitability for any purpose. Any commercial or other use requires consultation with a legal professional, and the author assumes no responsibility for such use. Users bear all risks associated with the results of using this model. The author is not liable for any direct or indirect losses or damages, including but not limited to loss of profits, business interruption, or data loss. Any use of this model is considered acceptance of the terms of this disclaimer.
### Model Usage
Download the model
Here is an example of downloading Hyacinth6B with Hugging Face Transformers; note that `trust_remote_code=True` is required because the ChatGLM architecture ships custom modeling code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# trust_remote_code=True is required: the ChatGLM architecture uses custom modeling code
tokenizer = AutoTokenizer.from_pretrained("chillymiao/Hyacinth6B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("chillymiao/Hyacinth6B", trust_remote_code=True)
```
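After loading, inference can go through the standard `generate` API. A minimal sketch continuing from the tokenizer and model above; the Traditional Chinese prompt and decoding settings are illustrative, not taken from the original card:
```python
# Continuing from the tokenizer/model loaded above
prompt = "請簡單介紹台灣的夜市文化。"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():  # torch is imported in the snippet above
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```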
### Citation
```
@misc{song2024hyacinth6b,
title={Hyacinth6B: A large language model for Traditional Chinese},
author={Chih-Wei Song and Yin-Te Tsai},
year={2024},
eprint={2403.13334},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"TRANSLATION",
"SUMMARIZATION"
] | [
"BEAR"
] |
mxs980/gte-Qwen2-1.5B-instruct-Q8_0-GGUF | mxs980 | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-1.5B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-06-30T20:54:46 | 2024-07-02T01:40:34 | 41 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.511868162026175
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.007803189284004
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.20754608934859
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.818037697335505
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 39.386760057101945
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.89687154075537
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82153952668092
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.094465801879295
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.65446577183913
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 46.30749237193961
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.581627240203474
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 45.21317724305628
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 42.49825170976724
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 68.23769904483508
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 62.50294403136556
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.594104491193555
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 70.55290063940157
- type: v_measure
value: 55.41500719337263
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 28.301882091023288
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 45.26992995191701
- type: v_measure
value: 42.773174876871145
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 71.04138999801822
- type: v_measure
value: 71.7056263158008
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
---
# mxs980/gte-Qwen2-1.5B-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-1.5B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mxs980/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
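Since `gte-Qwen2-1.5B-instruct` is an embedding model, the generation prompt above is best treated as a smoke test. Below is a minimal sketch for extracting an embedding vector instead, assuming your llama.cpp build ships the `llama-embedding` example binary and that it accepts the same `--hf-repo`/`--hf-file` download flags as `llama-cli`:
```bash
# Sketch only: llama-embedding is assumed to share common flags with llama-cli;
# check `llama-embedding --help` for your build
llama-embedding --hf-repo mxs980/gte-Qwen2-1.5B-instruct-Q8_0-GGUF \
  --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf \
  -p "What is the capital of China?"
```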
### Server:
```bash
llama-server --hf-repo mxs980/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -c 2048
```
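If you want the server to return embeddings rather than completions, the endpoint usually has to be enabled explicitly. A hedged sketch, assuming your build spells the flag `--embeddings` (some older builds use `--embedding`) and exposes the OpenAI-compatible `/v1/embeddings` route on the default port 8080:
```bash
# Flag and route names are assumptions; check `llama-server --help` for your build
llama-server --hf-repo mxs980/gte-Qwen2-1.5B-instruct-Q8_0-GGUF \
  --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf --embeddings -c 2048

# From another shell, request an embedding for a single input string
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "What is the capital of China?"}'
```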
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo mxs980/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo mxs980/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -c 2048
```
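The source-built binaries behave the same way as the Homebrew-installed ones, so the embedding workflow sketched earlier should carry over unchanged; presumably the build also produces `./llama-embedding` alongside `./llama-cli` and `./llama-server`, though the exact output location can vary between make and CMake builds.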
| ["SUMMARIZATION"] | ["BIOSSES", "SCIFACT"] |
Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF | Hoshino-Yumetsuki | sentence-similarity | ["sentence-transformers", "gguf", "mteb", "transformers", "Qwen2", "sentence-similarity", "llama-cpp", "gguf-my-repo", "base_model:Alibaba-NLP/gte-Qwen2-1.5B-instruct", "base_model:quantized:Alibaba-NLP/gte-Qwen2-1.5B-instruct", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us", "conversational"] | 2025-03-07T09:24:06 | 2025-03-07T09:24:18 | 41 | 0 |
---
base_model: Alibaba-NLP/gte-Qwen2-1.5B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 83.98507462686567
- type: ap
value: 50.93015252587014
- type: f1
value: 78.50416599051215
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 96.61065
- type: ap
value: 94.89174052954196
- type: f1
value: 96.60942596940565
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.614000000000004
- type: f1
value: 54.90553480294904
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 45.164
- type: map_at_10
value: 61.519
- type: map_at_100
value: 61.769
- type: map_at_1000
value: 61.769
- type: map_at_3
value: 57.443999999999996
- type: map_at_5
value: 60.058
- type: mrr_at_1
value: 46.088
- type: mrr_at_10
value: 61.861
- type: mrr_at_100
value: 62.117999999999995
- type: mrr_at_1000
value: 62.117999999999995
- type: mrr_at_3
value: 57.729
- type: mrr_at_5
value: 60.392
- type: ndcg_at_1
value: 45.164
- type: ndcg_at_10
value: 69.72
- type: ndcg_at_100
value: 70.719
- type: ndcg_at_1000
value: 70.719
- type: ndcg_at_3
value: 61.517999999999994
- type: ndcg_at_5
value: 66.247
- type: precision_at_1
value: 45.164
- type: precision_at_10
value: 9.545
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 24.443
- type: precision_at_5
value: 16.97
- type: recall_at_1
value: 45.164
- type: recall_at_10
value: 95.448
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 73.329
- type: recall_at_5
value: 84.851
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 50.511868162026175
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 45.007803189284004
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 64.55292107723382
- type: mrr
value: 77.66158818097877
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 85.65459047085452
- type: cos_sim_spearman
value: 82.10729255710761
- type: euclidean_pearson
value: 82.78079159312476
- type: euclidean_spearman
value: 80.50002701880933
- type: manhattan_pearson
value: 82.41372641383016
- type: manhattan_spearman
value: 80.57412509272639
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.30844155844156
- type: f1
value: 87.25307322443255
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 43.20754608934859
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 38.818037697335505
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 35.423
- type: map_at_10
value: 47.198
- type: map_at_100
value: 48.899
- type: map_at_1000
value: 49.004
- type: map_at_3
value: 43.114999999999995
- type: map_at_5
value: 45.491
- type: mrr_at_1
value: 42.918
- type: mrr_at_10
value: 53.299
- type: mrr_at_100
value: 54.032000000000004
- type: mrr_at_1000
value: 54.055
- type: mrr_at_3
value: 50.453
- type: mrr_at_5
value: 52.205999999999996
- type: ndcg_at_1
value: 42.918
- type: ndcg_at_10
value: 53.98
- type: ndcg_at_100
value: 59.57
- type: ndcg_at_1000
value: 60.879000000000005
- type: ndcg_at_3
value: 48.224000000000004
- type: ndcg_at_5
value: 50.998
- type: precision_at_1
value: 42.918
- type: precision_at_10
value: 10.299999999999999
- type: precision_at_100
value: 1.687
- type: precision_at_1000
value: 0.211
- type: precision_at_3
value: 22.842000000000002
- type: precision_at_5
value: 16.681
- type: recall_at_1
value: 35.423
- type: recall_at_10
value: 66.824
- type: recall_at_100
value: 89.564
- type: recall_at_1000
value: 97.501
- type: recall_at_3
value: 50.365
- type: recall_at_5
value: 57.921
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 33.205
- type: map_at_10
value: 44.859
- type: map_at_100
value: 46.135
- type: map_at_1000
value: 46.259
- type: map_at_3
value: 41.839
- type: map_at_5
value: 43.662
- type: mrr_at_1
value: 41.146
- type: mrr_at_10
value: 50.621
- type: mrr_at_100
value: 51.207
- type: mrr_at_1000
value: 51.246
- type: mrr_at_3
value: 48.535000000000004
- type: mrr_at_5
value: 49.818
- type: ndcg_at_1
value: 41.146
- type: ndcg_at_10
value: 50.683
- type: ndcg_at_100
value: 54.82
- type: ndcg_at_1000
value: 56.69
- type: ndcg_at_3
value: 46.611000000000004
- type: ndcg_at_5
value: 48.66
- type: precision_at_1
value: 41.146
- type: precision_at_10
value: 9.439
- type: precision_at_100
value: 1.465
- type: precision_at_1000
value: 0.194
- type: precision_at_3
value: 22.59
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 33.205
- type: recall_at_10
value: 61.028999999999996
- type: recall_at_100
value: 78.152
- type: recall_at_1000
value: 89.59700000000001
- type: recall_at_3
value: 49.05
- type: recall_at_5
value: 54.836
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 41.637
- type: map_at_10
value: 55.162
- type: map_at_100
value: 56.142
- type: map_at_1000
value: 56.188
- type: map_at_3
value: 51.564
- type: map_at_5
value: 53.696
- type: mrr_at_1
value: 47.524
- type: mrr_at_10
value: 58.243
- type: mrr_at_100
value: 58.879999999999995
- type: mrr_at_1000
value: 58.9
- type: mrr_at_3
value: 55.69499999999999
- type: mrr_at_5
value: 57.284
- type: ndcg_at_1
value: 47.524
- type: ndcg_at_10
value: 61.305
- type: ndcg_at_100
value: 65.077
- type: ndcg_at_1000
value: 65.941
- type: ndcg_at_3
value: 55.422000000000004
- type: ndcg_at_5
value: 58.516
- type: precision_at_1
value: 47.524
- type: precision_at_10
value: 9.918000000000001
- type: precision_at_100
value: 1.276
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.765
- type: precision_at_5
value: 17.204
- type: recall_at_1
value: 41.637
- type: recall_at_10
value: 76.185
- type: recall_at_100
value: 92.149
- type: recall_at_1000
value: 98.199
- type: recall_at_3
value: 60.856
- type: recall_at_5
value: 68.25099999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 26.27
- type: map_at_10
value: 37.463
- type: map_at_100
value: 38.434000000000005
- type: map_at_1000
value: 38.509
- type: map_at_3
value: 34.226
- type: map_at_5
value: 36.161
- type: mrr_at_1
value: 28.588
- type: mrr_at_10
value: 39.383
- type: mrr_at_100
value: 40.23
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 36.422
- type: mrr_at_5
value: 38.252
- type: ndcg_at_1
value: 28.588
- type: ndcg_at_10
value: 43.511
- type: ndcg_at_100
value: 48.274
- type: ndcg_at_1000
value: 49.975
- type: ndcg_at_3
value: 37.319
- type: ndcg_at_5
value: 40.568
- type: precision_at_1
value: 28.588
- type: precision_at_10
value: 6.893000000000001
- type: precision_at_100
value: 0.9900000000000001
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 16.347
- type: precision_at_5
value: 11.661000000000001
- type: recall_at_1
value: 26.27
- type: recall_at_10
value: 60.284000000000006
- type: recall_at_100
value: 81.902
- type: recall_at_1000
value: 94.43
- type: recall_at_3
value: 43.537
- type: recall_at_5
value: 51.475
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 18.168
- type: map_at_10
value: 28.410000000000004
- type: map_at_100
value: 29.78
- type: map_at_1000
value: 29.892999999999997
- type: map_at_3
value: 25.238
- type: map_at_5
value: 26.96
- type: mrr_at_1
value: 23.507
- type: mrr_at_10
value: 33.382
- type: mrr_at_100
value: 34.404
- type: mrr_at_1000
value: 34.467999999999996
- type: mrr_at_3
value: 30.637999999999998
- type: mrr_at_5
value: 32.199
- type: ndcg_at_1
value: 23.507
- type: ndcg_at_10
value: 34.571000000000005
- type: ndcg_at_100
value: 40.663
- type: ndcg_at_1000
value: 43.236000000000004
- type: ndcg_at_3
value: 29.053
- type: ndcg_at_5
value: 31.563999999999997
- type: precision_at_1
value: 23.507
- type: precision_at_10
value: 6.654
- type: precision_at_100
value: 1.113
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 14.427999999999999
- type: precision_at_5
value: 10.498000000000001
- type: recall_at_1
value: 18.168
- type: recall_at_10
value: 48.443000000000005
- type: recall_at_100
value: 74.47
- type: recall_at_1000
value: 92.494
- type: recall_at_3
value: 33.379999999999995
- type: recall_at_5
value: 39.76
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 32.39
- type: map_at_10
value: 44.479
- type: map_at_100
value: 45.977000000000004
- type: map_at_1000
value: 46.087
- type: map_at_3
value: 40.976
- type: map_at_5
value: 43.038
- type: mrr_at_1
value: 40.135
- type: mrr_at_10
value: 50.160000000000004
- type: mrr_at_100
value: 51.052
- type: mrr_at_1000
value: 51.087
- type: mrr_at_3
value: 47.818
- type: mrr_at_5
value: 49.171
- type: ndcg_at_1
value: 40.135
- type: ndcg_at_10
value: 50.731
- type: ndcg_at_100
value: 56.452000000000005
- type: ndcg_at_1000
value: 58.123000000000005
- type: ndcg_at_3
value: 45.507
- type: ndcg_at_5
value: 48.11
- type: precision_at_1
value: 40.135
- type: precision_at_10
value: 9.192
- type: precision_at_100
value: 1.397
- type: precision_at_1000
value: 0.169
- type: precision_at_3
value: 21.816
- type: precision_at_5
value: 15.476
- type: recall_at_1
value: 32.39
- type: recall_at_10
value: 63.597
- type: recall_at_100
value: 86.737
- type: recall_at_1000
value: 97.039
- type: recall_at_3
value: 48.906
- type: recall_at_5
value: 55.659000000000006
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.397
- type: map_at_10
value: 39.871
- type: map_at_100
value: 41.309000000000005
- type: map_at_1000
value: 41.409
- type: map_at_3
value: 36.047000000000004
- type: map_at_5
value: 38.104
- type: mrr_at_1
value: 34.703
- type: mrr_at_10
value: 44.773
- type: mrr_at_100
value: 45.64
- type: mrr_at_1000
value: 45.678999999999995
- type: mrr_at_3
value: 41.705
- type: mrr_at_5
value: 43.406
- type: ndcg_at_1
value: 34.703
- type: ndcg_at_10
value: 46.271
- type: ndcg_at_100
value: 52.037
- type: ndcg_at_1000
value: 53.81700000000001
- type: ndcg_at_3
value: 39.966
- type: ndcg_at_5
value: 42.801
- type: precision_at_1
value: 34.703
- type: precision_at_10
value: 8.744
- type: precision_at_100
value: 1.348
- type: precision_at_1000
value: 0.167
- type: precision_at_3
value: 19.102
- type: precision_at_5
value: 13.836
- type: recall_at_1
value: 28.397
- type: recall_at_10
value: 60.299
- type: recall_at_100
value: 84.595
- type: recall_at_1000
value: 96.155
- type: recall_at_3
value: 43.065
- type: recall_at_5
value: 50.371
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.044333333333338
- type: map_at_10
value: 38.78691666666666
- type: map_at_100
value: 40.113
- type: map_at_1000
value: 40.22125
- type: map_at_3
value: 35.52966666666667
- type: map_at_5
value: 37.372749999999996
- type: mrr_at_1
value: 33.159083333333335
- type: mrr_at_10
value: 42.913583333333335
- type: mrr_at_100
value: 43.7845
- type: mrr_at_1000
value: 43.830333333333336
- type: mrr_at_3
value: 40.29816666666667
- type: mrr_at_5
value: 41.81366666666667
- type: ndcg_at_1
value: 33.159083333333335
- type: ndcg_at_10
value: 44.75750000000001
- type: ndcg_at_100
value: 50.13658333333334
- type: ndcg_at_1000
value: 52.037
- type: ndcg_at_3
value: 39.34258333333334
- type: ndcg_at_5
value: 41.93708333333333
- type: precision_at_1
value: 33.159083333333335
- type: precision_at_10
value: 7.952416666666667
- type: precision_at_100
value: 1.2571666666666668
- type: precision_at_1000
value: 0.16099999999999998
- type: precision_at_3
value: 18.303833333333337
- type: precision_at_5
value: 13.057083333333333
- type: recall_at_1
value: 28.044333333333338
- type: recall_at_10
value: 58.237249999999996
- type: recall_at_100
value: 81.35391666666666
- type: recall_at_1000
value: 94.21283333333334
- type: recall_at_3
value: 43.32341666666667
- type: recall_at_5
value: 49.94908333333333
- type: map_at_1
value: 18.398
- type: map_at_10
value: 27.929
- type: map_at_100
value: 29.032999999999998
- type: map_at_1000
value: 29.126
- type: map_at_3
value: 25.070999999999998
- type: map_at_5
value: 26.583000000000002
- type: mrr_at_1
value: 19.963
- type: mrr_at_10
value: 29.997
- type: mrr_at_100
value: 30.9
- type: mrr_at_1000
value: 30.972
- type: mrr_at_3
value: 27.264
- type: mrr_at_5
value: 28.826
- type: ndcg_at_1
value: 19.963
- type: ndcg_at_10
value: 33.678999999999995
- type: ndcg_at_100
value: 38.931
- type: ndcg_at_1000
value: 41.379
- type: ndcg_at_3
value: 28.000000000000004
- type: ndcg_at_5
value: 30.637999999999998
- type: precision_at_1
value: 19.963
- type: precision_at_10
value: 5.7299999999999995
- type: precision_at_100
value: 0.902
- type: precision_at_1000
value: 0.122
- type: precision_at_3
value: 12.631
- type: precision_at_5
value: 9.057
- type: recall_at_1
value: 18.398
- type: recall_at_10
value: 49.254
- type: recall_at_100
value: 73.182
- type: recall_at_1000
value: 91.637
- type: recall_at_3
value: 34.06
- type: recall_at_5
value: 40.416000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 27.838
- type: map_at_10
value: 36.04
- type: map_at_100
value: 37.113
- type: map_at_1000
value: 37.204
- type: map_at_3
value: 33.585
- type: map_at_5
value: 34.845
- type: mrr_at_1
value: 30.982
- type: mrr_at_10
value: 39.105000000000004
- type: mrr_at_100
value: 39.98
- type: mrr_at_1000
value: 40.042
- type: mrr_at_3
value: 36.912
- type: mrr_at_5
value: 38.062000000000005
- type: ndcg_at_1
value: 30.982
- type: ndcg_at_10
value: 40.982
- type: ndcg_at_100
value: 46.092
- type: ndcg_at_1000
value: 48.25
- type: ndcg_at_3
value: 36.41
- type: ndcg_at_5
value: 38.379999999999995
- type: precision_at_1
value: 30.982
- type: precision_at_10
value: 6.534
- type: precision_at_100
value: 0.9820000000000001
- type: precision_at_1000
value: 0.124
- type: precision_at_3
value: 15.745999999999999
- type: precision_at_5
value: 10.828
- type: recall_at_1
value: 27.838
- type: recall_at_10
value: 52.971000000000004
- type: recall_at_100
value: 76.357
- type: recall_at_1000
value: 91.973
- type: recall_at_3
value: 40.157
- type: recall_at_5
value: 45.147999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 19.059
- type: map_at_10
value: 27.454
- type: map_at_100
value: 28.736
- type: map_at_1000
value: 28.865000000000002
- type: map_at_3
value: 24.773999999999997
- type: map_at_5
value: 26.266000000000002
- type: mrr_at_1
value: 23.125
- type: mrr_at_10
value: 31.267
- type: mrr_at_100
value: 32.32
- type: mrr_at_1000
value: 32.394
- type: mrr_at_3
value: 28.894
- type: mrr_at_5
value: 30.281000000000002
- type: ndcg_at_1
value: 23.125
- type: ndcg_at_10
value: 32.588
- type: ndcg_at_100
value: 38.432
- type: ndcg_at_1000
value: 41.214
- type: ndcg_at_3
value: 27.938000000000002
- type: ndcg_at_5
value: 30.127
- type: precision_at_1
value: 23.125
- type: precision_at_10
value: 5.9639999999999995
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.148
- type: precision_at_3
value: 13.294
- type: precision_at_5
value: 9.628
- type: recall_at_1
value: 19.059
- type: recall_at_10
value: 44.25
- type: recall_at_100
value: 69.948
- type: recall_at_1000
value: 89.35300000000001
- type: recall_at_3
value: 31.114000000000004
- type: recall_at_5
value: 36.846000000000004
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 28.355999999999998
- type: map_at_10
value: 39.055
- type: map_at_100
value: 40.486
- type: map_at_1000
value: 40.571
- type: map_at_3
value: 35.69
- type: map_at_5
value: 37.605
- type: mrr_at_1
value: 33.302
- type: mrr_at_10
value: 42.986000000000004
- type: mrr_at_100
value: 43.957
- type: mrr_at_1000
value: 43.996
- type: mrr_at_3
value: 40.111999999999995
- type: mrr_at_5
value: 41.735
- type: ndcg_at_1
value: 33.302
- type: ndcg_at_10
value: 44.962999999999994
- type: ndcg_at_100
value: 50.917
- type: ndcg_at_1000
value: 52.622
- type: ndcg_at_3
value: 39.182
- type: ndcg_at_5
value: 41.939
- type: precision_at_1
value: 33.302
- type: precision_at_10
value: 7.779999999999999
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 18.035
- type: precision_at_5
value: 12.873000000000001
- type: recall_at_1
value: 28.355999999999998
- type: recall_at_10
value: 58.782000000000004
- type: recall_at_100
value: 84.02199999999999
- type: recall_at_1000
value: 95.511
- type: recall_at_3
value: 43.126999999999995
- type: recall_at_5
value: 50.14999999999999
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.391
- type: map_at_10
value: 37.523
- type: map_at_100
value: 39.312000000000005
- type: map_at_1000
value: 39.54
- type: map_at_3
value: 34.231
- type: map_at_5
value: 36.062
- type: mrr_at_1
value: 32.016
- type: mrr_at_10
value: 41.747
- type: mrr_at_100
value: 42.812
- type: mrr_at_1000
value: 42.844
- type: mrr_at_3
value: 39.129999999999995
- type: mrr_at_5
value: 40.524
- type: ndcg_at_1
value: 32.016
- type: ndcg_at_10
value: 43.826
- type: ndcg_at_100
value: 50.373999999999995
- type: ndcg_at_1000
value: 52.318
- type: ndcg_at_3
value: 38.479
- type: ndcg_at_5
value: 40.944
- type: precision_at_1
value: 32.016
- type: precision_at_10
value: 8.280999999999999
- type: precision_at_100
value: 1.6760000000000002
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 18.05
- type: precision_at_5
value: 13.083
- type: recall_at_1
value: 27.391
- type: recall_at_10
value: 56.928999999999995
- type: recall_at_100
value: 85.169
- type: recall_at_1000
value: 96.665
- type: recall_at_3
value: 42.264
- type: recall_at_5
value: 48.556
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 19.681
- type: map_at_10
value: 32.741
- type: map_at_100
value: 34.811
- type: map_at_1000
value: 35.003
- type: map_at_3
value: 27.697
- type: map_at_5
value: 30.372
- type: mrr_at_1
value: 44.951
- type: mrr_at_10
value: 56.34400000000001
- type: mrr_at_100
value: 56.961
- type: mrr_at_1000
value: 56.987
- type: mrr_at_3
value: 53.681
- type: mrr_at_5
value: 55.407
- type: ndcg_at_1
value: 44.951
- type: ndcg_at_10
value: 42.905
- type: ndcg_at_100
value: 49.95
- type: ndcg_at_1000
value: 52.917
- type: ndcg_at_3
value: 36.815
- type: ndcg_at_5
value: 38.817
- type: precision_at_1
value: 44.951
- type: precision_at_10
value: 12.989999999999998
- type: precision_at_100
value: 2.068
- type: precision_at_1000
value: 0.263
- type: precision_at_3
value: 27.275
- type: precision_at_5
value: 20.365
- type: recall_at_1
value: 19.681
- type: recall_at_10
value: 48.272999999999996
- type: recall_at_100
value: 71.87400000000001
- type: recall_at_1000
value: 87.929
- type: recall_at_3
value: 32.653999999999996
- type: recall_at_5
value: 39.364
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 10.231
- type: map_at_10
value: 22.338
- type: map_at_100
value: 31.927
- type: map_at_1000
value: 33.87
- type: map_at_3
value: 15.559999999999999
- type: map_at_5
value: 18.239
- type: mrr_at_1
value: 75.0
- type: mrr_at_10
value: 81.303
- type: mrr_at_100
value: 81.523
- type: mrr_at_1000
value: 81.53
- type: mrr_at_3
value: 80.083
- type: mrr_at_5
value: 80.758
- type: ndcg_at_1
value: 64.625
- type: ndcg_at_10
value: 48.687000000000005
- type: ndcg_at_100
value: 52.791
- type: ndcg_at_1000
value: 60.041999999999994
- type: ndcg_at_3
value: 53.757999999999996
- type: ndcg_at_5
value: 50.76500000000001
- type: precision_at_1
value: 75.0
- type: precision_at_10
value: 38.3
- type: precision_at_100
value: 12.025
- type: precision_at_1000
value: 2.3970000000000002
- type: precision_at_3
value: 55.417
- type: precision_at_5
value: 47.5
- type: recall_at_1
value: 10.231
- type: recall_at_10
value: 27.697
- type: recall_at_100
value: 57.409
- type: recall_at_1000
value: 80.547
- type: recall_at_3
value: 16.668
- type: recall_at_5
value: 20.552
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 61.365
- type: f1
value: 56.7540827912991
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 83.479
- type: map_at_10
value: 88.898
- type: map_at_100
value: 89.11
- type: map_at_1000
value: 89.12400000000001
- type: map_at_3
value: 88.103
- type: map_at_5
value: 88.629
- type: mrr_at_1
value: 89.934
- type: mrr_at_10
value: 93.91000000000001
- type: mrr_at_100
value: 93.937
- type: mrr_at_1000
value: 93.938
- type: mrr_at_3
value: 93.62700000000001
- type: mrr_at_5
value: 93.84599999999999
- type: ndcg_at_1
value: 89.934
- type: ndcg_at_10
value: 91.574
- type: ndcg_at_100
value: 92.238
- type: ndcg_at_1000
value: 92.45
- type: ndcg_at_3
value: 90.586
- type: ndcg_at_5
value: 91.16300000000001
- type: precision_at_1
value: 89.934
- type: precision_at_10
value: 10.555
- type: precision_at_100
value: 1.1159999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.588
- type: precision_at_5
value: 20.642
- type: recall_at_1
value: 83.479
- type: recall_at_10
value: 94.971
- type: recall_at_100
value: 97.397
- type: recall_at_1000
value: 98.666
- type: recall_at_3
value: 92.24799999999999
- type: recall_at_5
value: 93.797
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 27.16
- type: map_at_10
value: 45.593
- type: map_at_100
value: 47.762
- type: map_at_1000
value: 47.899
- type: map_at_3
value: 39.237
- type: map_at_5
value: 42.970000000000006
- type: mrr_at_1
value: 52.623
- type: mrr_at_10
value: 62.637
- type: mrr_at_100
value: 63.169
- type: mrr_at_1000
value: 63.185
- type: mrr_at_3
value: 59.928000000000004
- type: mrr_at_5
value: 61.702999999999996
- type: ndcg_at_1
value: 52.623
- type: ndcg_at_10
value: 54.701
- type: ndcg_at_100
value: 61.263
- type: ndcg_at_1000
value: 63.134
- type: ndcg_at_3
value: 49.265
- type: ndcg_at_5
value: 51.665000000000006
- type: precision_at_1
value: 52.623
- type: precision_at_10
value: 15.185
- type: precision_at_100
value: 2.202
- type: precision_at_1000
value: 0.254
- type: precision_at_3
value: 32.767
- type: precision_at_5
value: 24.722
- type: recall_at_1
value: 27.16
- type: recall_at_10
value: 63.309000000000005
- type: recall_at_100
value: 86.722
- type: recall_at_1000
value: 97.505
- type: recall_at_3
value: 45.045
- type: recall_at_5
value: 54.02400000000001
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 42.573
- type: map_at_10
value: 59.373
- type: map_at_100
value: 60.292
- type: map_at_1000
value: 60.358999999999995
- type: map_at_3
value: 56.159000000000006
- type: map_at_5
value: 58.123999999999995
- type: mrr_at_1
value: 85.14500000000001
- type: mrr_at_10
value: 89.25999999999999
- type: mrr_at_100
value: 89.373
- type: mrr_at_1000
value: 89.377
- type: mrr_at_3
value: 88.618
- type: mrr_at_5
value: 89.036
- type: ndcg_at_1
value: 85.14500000000001
- type: ndcg_at_10
value: 68.95
- type: ndcg_at_100
value: 71.95
- type: ndcg_at_1000
value: 73.232
- type: ndcg_at_3
value: 64.546
- type: ndcg_at_5
value: 66.945
- type: precision_at_1
value: 85.14500000000001
- type: precision_at_10
value: 13.865
- type: precision_at_100
value: 1.619
- type: precision_at_1000
value: 0.179
- type: precision_at_3
value: 39.703
- type: precision_at_5
value: 25.718000000000004
- type: recall_at_1
value: 42.573
- type: recall_at_10
value: 69.325
- type: recall_at_100
value: 80.932
- type: recall_at_1000
value: 89.446
- type: recall_at_3
value: 59.553999999999995
- type: recall_at_5
value: 64.294
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 95.8336
- type: ap
value: 93.78862962194073
- type: f1
value: 95.83192650728371
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 23.075000000000003
- type: map_at_10
value: 36.102000000000004
- type: map_at_100
value: 37.257
- type: map_at_1000
value: 37.3
- type: map_at_3
value: 32.144
- type: map_at_5
value: 34.359
- type: mrr_at_1
value: 23.711
- type: mrr_at_10
value: 36.671
- type: mrr_at_100
value: 37.763999999999996
- type: mrr_at_1000
value: 37.801
- type: mrr_at_3
value: 32.775
- type: mrr_at_5
value: 34.977000000000004
- type: ndcg_at_1
value: 23.711
- type: ndcg_at_10
value: 43.361
- type: ndcg_at_100
value: 48.839
- type: ndcg_at_1000
value: 49.88
- type: ndcg_at_3
value: 35.269
- type: ndcg_at_5
value: 39.224
- type: precision_at_1
value: 23.711
- type: precision_at_10
value: 6.866999999999999
- type: precision_at_100
value: 0.96
- type: precision_at_1000
value: 0.105
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.083
- type: recall_at_1
value: 23.075000000000003
- type: recall_at_10
value: 65.756
- type: recall_at_100
value: 90.88199999999999
- type: recall_at_1000
value: 98.739
- type: recall_at_3
value: 43.691
- type: recall_at_5
value: 53.15800000000001
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 97.69493844049248
- type: f1
value: 97.55048089616261
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 88.75968992248062
- type: f1
value: 72.26321223399123
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 82.40080699394754
- type: f1
value: 79.62590029057968
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 84.49562878278414
- type: f1
value: 84.0040193313333
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 39.386760057101945
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 37.89687154075537
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.94151656057482
- type: mrr
value: 35.32684700746953
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.239999999999999
- type: map_at_10
value: 14.862
- type: map_at_100
value: 18.955
- type: map_at_1000
value: 20.694000000000003
- type: map_at_3
value: 10.683
- type: map_at_5
value: 12.674
- type: mrr_at_1
value: 50.15500000000001
- type: mrr_at_10
value: 59.697
- type: mrr_at_100
value: 60.095
- type: mrr_at_1000
value: 60.129999999999995
- type: mrr_at_3
value: 58.35900000000001
- type: mrr_at_5
value: 58.839
- type: ndcg_at_1
value: 48.452
- type: ndcg_at_10
value: 39.341
- type: ndcg_at_100
value: 35.866
- type: ndcg_at_1000
value: 45.111000000000004
- type: ndcg_at_3
value: 44.527
- type: ndcg_at_5
value: 42.946
- type: precision_at_1
value: 50.15500000000001
- type: precision_at_10
value: 29.536
- type: precision_at_100
value: 9.142
- type: precision_at_1000
value: 2.2849999999999997
- type: precision_at_3
value: 41.899
- type: precision_at_5
value: 37.647000000000006
- type: recall_at_1
value: 6.239999999999999
- type: recall_at_10
value: 19.278000000000002
- type: recall_at_100
value: 36.074
- type: recall_at_1000
value: 70.017
- type: recall_at_3
value: 12.066
- type: recall_at_5
value: 15.254000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.75
- type: map_at_10
value: 56.443
- type: map_at_100
value: 57.233999999999995
- type: map_at_1000
value: 57.249
- type: map_at_3
value: 52.032999999999994
- type: map_at_5
value: 54.937999999999995
- type: mrr_at_1
value: 44.728
- type: mrr_at_10
value: 58.939
- type: mrr_at_100
value: 59.489000000000004
- type: mrr_at_1000
value: 59.499
- type: mrr_at_3
value: 55.711999999999996
- type: mrr_at_5
value: 57.89
- type: ndcg_at_1
value: 44.728
- type: ndcg_at_10
value: 63.998999999999995
- type: ndcg_at_100
value: 67.077
- type: ndcg_at_1000
value: 67.40899999999999
- type: ndcg_at_3
value: 56.266000000000005
- type: ndcg_at_5
value: 60.88
- type: precision_at_1
value: 44.728
- type: precision_at_10
value: 10.09
- type: precision_at_100
value: 1.1809999999999998
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.145
- type: precision_at_5
value: 17.822
- type: recall_at_1
value: 39.75
- type: recall_at_10
value: 84.234
- type: recall_at_100
value: 97.055
- type: recall_at_1000
value: 99.517
- type: recall_at_3
value: 64.851
- type: recall_at_5
value: 75.343
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.085
- type: map_at_10
value: 86.107
- type: map_at_100
value: 86.727
- type: map_at_1000
value: 86.74
- type: map_at_3
value: 83.21
- type: map_at_5
value: 85.06
- type: mrr_at_1
value: 82.94
- type: mrr_at_10
value: 88.845
- type: mrr_at_100
value: 88.926
- type: mrr_at_1000
value: 88.927
- type: mrr_at_3
value: 87.993
- type: mrr_at_5
value: 88.62299999999999
- type: ndcg_at_1
value: 82.97
- type: ndcg_at_10
value: 89.645
- type: ndcg_at_100
value: 90.717
- type: ndcg_at_1000
value: 90.78
- type: ndcg_at_3
value: 86.99900000000001
- type: ndcg_at_5
value: 88.52600000000001
- type: precision_at_1
value: 82.97
- type: precision_at_10
value: 13.569
- type: precision_at_100
value: 1.539
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.043
- type: precision_at_5
value: 24.992
- type: recall_at_1
value: 72.085
- type: recall_at_10
value: 96.262
- type: recall_at_100
value: 99.77000000000001
- type: recall_at_1000
value: 99.997
- type: recall_at_3
value: 88.652
- type: recall_at_5
value: 93.01899999999999
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 55.82153952668092
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.094465801879295
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.688
- type: map_at_10
value: 15.201999999999998
- type: map_at_100
value: 18.096
- type: map_at_1000
value: 18.481
- type: map_at_3
value: 10.734
- type: map_at_5
value: 12.94
- type: mrr_at_1
value: 28.000000000000004
- type: mrr_at_10
value: 41.101
- type: mrr_at_100
value: 42.202
- type: mrr_at_1000
value: 42.228
- type: mrr_at_3
value: 37.683
- type: mrr_at_5
value: 39.708
- type: ndcg_at_1
value: 28.000000000000004
- type: ndcg_at_10
value: 24.976000000000003
- type: ndcg_at_100
value: 35.129
- type: ndcg_at_1000
value: 40.77
- type: ndcg_at_3
value: 23.787
- type: ndcg_at_5
value: 20.816000000000003
- type: precision_at_1
value: 28.000000000000004
- type: precision_at_10
value: 13.04
- type: precision_at_100
value: 2.761
- type: precision_at_1000
value: 0.41000000000000003
- type: precision_at_3
value: 22.6
- type: precision_at_5
value: 18.52
- type: recall_at_1
value: 5.688
- type: recall_at_10
value: 26.43
- type: recall_at_100
value: 56.02
- type: recall_at_1000
value: 83.21
- type: recall_at_3
value: 13.752
- type: recall_at_5
value: 18.777
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 85.15084859283178
- type: cos_sim_spearman
value: 80.49030614009419
- type: euclidean_pearson
value: 81.84574978672468
- type: euclidean_spearman
value: 79.89787150656818
- type: manhattan_pearson
value: 81.63076538567131
- type: manhattan_spearman
value: 79.69867352121841
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 84.64097921490992
- type: cos_sim_spearman
value: 77.25370084896514
- type: euclidean_pearson
value: 82.71210826468788
- type: euclidean_spearman
value: 78.50445584994826
- type: manhattan_pearson
value: 82.92580164330298
- type: manhattan_spearman
value: 78.69686891301019
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 87.24596417308994
- type: cos_sim_spearman
value: 87.79454220555091
- type: euclidean_pearson
value: 87.40242561671164
- type: euclidean_spearman
value: 88.25955597373556
- type: manhattan_pearson
value: 87.25160240485849
- type: manhattan_spearman
value: 88.155794979818
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 84.44914233422564
- type: cos_sim_spearman
value: 82.91015471820322
- type: euclidean_pearson
value: 84.7206656630327
- type: euclidean_spearman
value: 83.86408872059216
- type: manhattan_pearson
value: 84.72816725158454
- type: manhattan_spearman
value: 84.01603388572788
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 87.6168026237477
- type: cos_sim_spearman
value: 88.45414278092397
- type: euclidean_pearson
value: 88.57023240882022
- type: euclidean_spearman
value: 89.04102190922094
- type: manhattan_pearson
value: 88.66695535796354
- type: manhattan_spearman
value: 89.19898476680969
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.27925826089424
- type: cos_sim_spearman
value: 85.45291099550461
- type: euclidean_pearson
value: 83.63853036580834
- type: euclidean_spearman
value: 84.33468035821484
- type: manhattan_pearson
value: 83.72778773251596
- type: manhattan_spearman
value: 84.51583132445376
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 89.67375185692552
- type: cos_sim_spearman
value: 90.32542469203855
- type: euclidean_pearson
value: 89.63513717951847
- type: euclidean_spearman
value: 89.87760271003745
- type: manhattan_pearson
value: 89.28381452982924
- type: manhattan_spearman
value: 89.53568197785721
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.24644693819846
- type: cos_sim_spearman
value: 66.09889420525377
- type: euclidean_pearson
value: 63.72551583520747
- type: euclidean_spearman
value: 63.01385470780679
- type: manhattan_pearson
value: 64.09258157214097
- type: manhattan_spearman
value: 63.080517752822594
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 86.27321463839989
- type: cos_sim_spearman
value: 86.37572865993327
- type: euclidean_pearson
value: 86.36268020198149
- type: euclidean_spearman
value: 86.31089339478922
- type: manhattan_pearson
value: 86.4260445761947
- type: manhattan_spearman
value: 86.45885895320457
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 86.52456702387798
- type: mrr
value: 96.34556529164372
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.99400000000001
- type: map_at_10
value: 73.38799999999999
- type: map_at_100
value: 73.747
- type: map_at_1000
value: 73.75
- type: map_at_3
value: 70.04599999999999
- type: map_at_5
value: 72.095
- type: mrr_at_1
value: 65.0
- type: mrr_at_10
value: 74.42800000000001
- type: mrr_at_100
value: 74.722
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.056
- type: mrr_at_5
value: 73.60600000000001
- type: ndcg_at_1
value: 65.0
- type: ndcg_at_10
value: 78.435
- type: ndcg_at_100
value: 79.922
- type: ndcg_at_1000
value: 80.00500000000001
- type: ndcg_at_3
value: 73.05199999999999
- type: ndcg_at_5
value: 75.98
- type: precision_at_1
value: 65.0
- type: precision_at_10
value: 10.5
- type: precision_at_100
value: 1.123
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 28.555999999999997
- type: precision_at_5
value: 19.0
- type: recall_at_1
value: 61.99400000000001
- type: recall_at_10
value: 92.72200000000001
- type: recall_at_100
value: 99.333
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 78.739
- type: recall_at_5
value: 85.828
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.79009900990098
- type: cos_sim_ap
value: 95.3203137438653
- type: cos_sim_f1
value: 89.12386706948641
- type: cos_sim_precision
value: 89.75659229208925
- type: cos_sim_recall
value: 88.5
- type: dot_accuracy
value: 99.67821782178218
- type: dot_ap
value: 89.94069840000675
- type: dot_f1
value: 83.45902463549521
- type: dot_precision
value: 83.9231547017189
- type: dot_recall
value: 83.0
- type: euclidean_accuracy
value: 99.78613861386138
- type: euclidean_ap
value: 95.10648259135526
- type: euclidean_f1
value: 88.77338877338877
- type: euclidean_precision
value: 92.42424242424242
- type: euclidean_recall
value: 85.39999999999999
- type: manhattan_accuracy
value: 99.7950495049505
- type: manhattan_ap
value: 95.29987661320946
- type: manhattan_f1
value: 89.21313183949972
- type: manhattan_precision
value: 93.14472252448314
- type: manhattan_recall
value: 85.6
- type: max_accuracy
value: 99.7950495049505
- type: max_ap
value: 95.3203137438653
- type: max_f1
value: 89.21313183949972
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 67.65446577183913
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 46.30749237193961
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.91481849959949
- type: mrr
value: 55.853506175197346
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.08196549170419
- type: cos_sim_spearman
value: 31.16661390597077
- type: dot_pearson
value: 29.892258410943466
- type: dot_spearman
value: 30.51328811965085
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.23900000000000002
- type: map_at_10
value: 2.173
- type: map_at_100
value: 14.24
- type: map_at_1000
value: 35.309000000000005
- type: map_at_3
value: 0.7100000000000001
- type: map_at_5
value: 1.163
- type: mrr_at_1
value: 92.0
- type: mrr_at_10
value: 96.0
- type: mrr_at_100
value: 96.0
- type: mrr_at_1000
value: 96.0
- type: mrr_at_3
value: 96.0
- type: mrr_at_5
value: 96.0
- type: ndcg_at_1
value: 90.0
- type: ndcg_at_10
value: 85.382
- type: ndcg_at_100
value: 68.03
- type: ndcg_at_1000
value: 61.021
- type: ndcg_at_3
value: 89.765
- type: ndcg_at_5
value: 88.444
- type: precision_at_1
value: 92.0
- type: precision_at_10
value: 88.0
- type: precision_at_100
value: 70.02000000000001
- type: precision_at_1000
value: 26.984
- type: precision_at_3
value: 94.0
- type: precision_at_5
value: 92.80000000000001
- type: recall_at_1
value: 0.23900000000000002
- type: recall_at_10
value: 2.313
- type: recall_at_100
value: 17.049
- type: recall_at_1000
value: 57.489999999999995
- type: recall_at_3
value: 0.737
- type: recall_at_5
value: 1.221
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.75
- type: map_at_10
value: 11.29
- type: map_at_100
value: 18.032999999999998
- type: map_at_1000
value: 19.746
- type: map_at_3
value: 6.555
- type: map_at_5
value: 8.706999999999999
- type: mrr_at_1
value: 34.694
- type: mrr_at_10
value: 50.55
- type: mrr_at_100
value: 51.659
- type: mrr_at_1000
value: 51.659
- type: mrr_at_3
value: 47.278999999999996
- type: mrr_at_5
value: 49.728
- type: ndcg_at_1
value: 32.653
- type: ndcg_at_10
value: 27.894000000000002
- type: ndcg_at_100
value: 39.769
- type: ndcg_at_1000
value: 51.495999999999995
- type: ndcg_at_3
value: 32.954
- type: ndcg_at_5
value: 31.502999999999997
- type: precision_at_1
value: 34.694
- type: precision_at_10
value: 23.265
- type: precision_at_100
value: 7.898
- type: precision_at_1000
value: 1.58
- type: precision_at_3
value: 34.694
- type: precision_at_5
value: 31.429000000000002
- type: recall_at_1
value: 2.75
- type: recall_at_10
value: 16.953
- type: recall_at_100
value: 48.68
- type: recall_at_1000
value: 85.18599999999999
- type: recall_at_3
value: 7.710999999999999
- type: recall_at_5
value: 11.484
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 82.66099999999999
- type: ap
value: 25.555698090238337
- type: f1
value: 66.48402012461622
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.94567062818335
- type: f1
value: 73.28139189595674
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 49.581627240203474
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.78089050485785
- type: cos_sim_ap
value: 79.64487116574168
- type: cos_sim_f1
value: 72.46563021970964
- type: cos_sim_precision
value: 70.62359128474831
- type: cos_sim_recall
value: 74.40633245382587
- type: dot_accuracy
value: 86.2609524944865
- type: dot_ap
value: 75.513046857613
- type: dot_f1
value: 68.58213616489695
- type: dot_precision
value: 65.12455516014235
- type: dot_recall
value: 72.42744063324538
- type: euclidean_accuracy
value: 87.6080348095607
- type: euclidean_ap
value: 79.00204933649795
- type: euclidean_f1
value: 72.14495342605589
- type: euclidean_precision
value: 69.85421299728193
- type: euclidean_recall
value: 74.5910290237467
- type: manhattan_accuracy
value: 87.59611372712642
- type: manhattan_ap
value: 78.78523756706264
- type: manhattan_f1
value: 71.86499137718648
- type: manhattan_precision
value: 67.39833641404806
- type: manhattan_recall
value: 76.96569920844327
- type: max_accuracy
value: 87.78089050485785
- type: max_ap
value: 79.64487116574168
- type: max_f1
value: 72.46563021970964
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.98719292117825
- type: cos_sim_ap
value: 87.58146137353202
- type: cos_sim_f1
value: 80.28543232369239
- type: cos_sim_precision
value: 79.1735289714029
- type: cos_sim_recall
value: 81.42901139513397
- type: dot_accuracy
value: 88.9199363526992
- type: dot_ap
value: 84.98499998630417
- type: dot_f1
value: 78.21951400757969
- type: dot_precision
value: 75.58523624874336
- type: dot_recall
value: 81.04404065291038
- type: euclidean_accuracy
value: 89.77374160748244
- type: euclidean_ap
value: 87.35151562835209
- type: euclidean_f1
value: 79.92160922940393
- type: euclidean_precision
value: 76.88531587933979
- type: euclidean_recall
value: 83.20757622420696
- type: manhattan_accuracy
value: 89.72717041176699
- type: manhattan_ap
value: 87.34065592142515
- type: manhattan_f1
value: 79.85603419187943
- type: manhattan_precision
value: 77.82243332115455
- type: manhattan_recall
value: 81.99876809362489
- type: max_accuracy
value: 89.98719292117825
- type: max_ap
value: 87.58146137353202
- type: max_f1
value: 80.28543232369239
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 53.45954203592337
- type: cos_sim_spearman
value: 58.42154680418638
- type: euclidean_pearson
value: 56.41543791722753
- type: euclidean_spearman
value: 58.39328016640146
- type: manhattan_pearson
value: 56.318510356833876
- type: manhattan_spearman
value: 58.28423447818184
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 50.78356460675945
- type: cos_sim_spearman
value: 55.6530411663269
- type: euclidean_pearson
value: 56.50763660417816
- type: euclidean_spearman
value: 55.733823335669065
- type: manhattan_pearson
value: 56.45323093512866
- type: manhattan_spearman
value: 55.63248619032702
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.209999999999994
- type: f1
value: 46.08892432018655
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 70.25573992001478
- type: cos_sim_spearman
value: 73.85247134951433
- type: euclidean_pearson
value: 72.60033082168442
- type: euclidean_spearman
value: 73.72445893756499
- type: manhattan_pearson
value: 72.59932284620231
- type: manhattan_spearman
value: 73.68002490614583
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 45.21317724305628
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 42.49825170976724
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.15661686810597
- type: mrr
value: 90.11222222222223
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.1204726064383
- type: mrr
value: 90.20142857142858
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 27.224999999999998
- type: map_at_10
value: 40.169
- type: map_at_100
value: 42.0
- type: map_at_1000
value: 42.109
- type: map_at_3
value: 35.76
- type: map_at_5
value: 38.221
- type: mrr_at_1
value: 40.56
- type: mrr_at_10
value: 49.118
- type: mrr_at_100
value: 50.092999999999996
- type: mrr_at_1000
value: 50.133
- type: mrr_at_3
value: 46.507
- type: mrr_at_5
value: 47.973
- type: ndcg_at_1
value: 40.56
- type: ndcg_at_10
value: 46.972
- type: ndcg_at_100
value: 54.04
- type: ndcg_at_1000
value: 55.862
- type: ndcg_at_3
value: 41.36
- type: ndcg_at_5
value: 43.704
- type: precision_at_1
value: 40.56
- type: precision_at_10
value: 10.302999999999999
- type: precision_at_100
value: 1.606
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 23.064
- type: precision_at_5
value: 16.764000000000003
- type: recall_at_1
value: 27.224999999999998
- type: recall_at_10
value: 58.05200000000001
- type: recall_at_100
value: 87.092
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 41.373
- type: recall_at_5
value: 48.453
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 77.40228502705953
- type: cos_sim_ap
value: 86.22359172956327
- type: cos_sim_f1
value: 78.96328293736501
- type: cos_sim_precision
value: 73.36945615091311
- type: cos_sim_recall
value: 85.48047696983868
- type: dot_accuracy
value: 75.53818400481059
- type: dot_ap
value: 83.70164011305312
- type: dot_f1
value: 77.67298719348754
- type: dot_precision
value: 67.49482401656314
- type: dot_recall
value: 91.46598082768296
- type: euclidean_accuracy
value: 77.94347564642213
- type: euclidean_ap
value: 86.4652108728609
- type: euclidean_f1
value: 79.15555555555555
- type: euclidean_precision
value: 75.41816641964853
- type: euclidean_recall
value: 83.28267477203647
- type: manhattan_accuracy
value: 77.45039085989175
- type: manhattan_ap
value: 86.09986583900665
- type: manhattan_f1
value: 78.93669264438988
- type: manhattan_precision
value: 72.63261296660117
- type: manhattan_recall
value: 86.43909282207154
- type: max_accuracy
value: 77.94347564642213
- type: max_ap
value: 86.4652108728609
- type: max_f1
value: 79.15555555555555
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 69.336
- type: map_at_10
value: 77.16
- type: map_at_100
value: 77.47500000000001
- type: map_at_1000
value: 77.482
- type: map_at_3
value: 75.42999999999999
- type: map_at_5
value: 76.468
- type: mrr_at_1
value: 69.44200000000001
- type: mrr_at_10
value: 77.132
- type: mrr_at_100
value: 77.43299999999999
- type: mrr_at_1000
value: 77.44
- type: mrr_at_3
value: 75.395
- type: mrr_at_5
value: 76.459
- type: ndcg_at_1
value: 69.547
- type: ndcg_at_10
value: 80.794
- type: ndcg_at_100
value: 82.245
- type: ndcg_at_1000
value: 82.40899999999999
- type: ndcg_at_3
value: 77.303
- type: ndcg_at_5
value: 79.168
- type: precision_at_1
value: 69.547
- type: precision_at_10
value: 9.305
- type: precision_at_100
value: 0.9979999999999999
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 27.749000000000002
- type: precision_at_5
value: 17.576
- type: recall_at_1
value: 69.336
- type: recall_at_10
value: 92.097
- type: recall_at_100
value: 98.736
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 82.64
- type: recall_at_5
value: 87.144
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.817999999999998
- type: map_at_10
value: 82.67
- type: map_at_100
value: 85.304
- type: map_at_1000
value: 85.334
- type: map_at_3
value: 57.336
- type: map_at_5
value: 72.474
- type: mrr_at_1
value: 91.45
- type: mrr_at_10
value: 94.272
- type: mrr_at_100
value: 94.318
- type: mrr_at_1000
value: 94.32000000000001
- type: mrr_at_3
value: 94.0
- type: mrr_at_5
value: 94.17699999999999
- type: ndcg_at_1
value: 91.45
- type: ndcg_at_10
value: 89.404
- type: ndcg_at_100
value: 91.724
- type: ndcg_at_1000
value: 91.973
- type: ndcg_at_3
value: 88.104
- type: ndcg_at_5
value: 87.25699999999999
- type: precision_at_1
value: 91.45
- type: precision_at_10
value: 42.585
- type: precision_at_100
value: 4.838
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 78.8
- type: precision_at_5
value: 66.66
- type: recall_at_1
value: 26.817999999999998
- type: recall_at_10
value: 90.67
- type: recall_at_100
value: 98.36200000000001
- type: recall_at_1000
value: 99.583
- type: recall_at_3
value: 59.614999999999995
- type: recall_at_5
value: 77.05199999999999
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 47.699999999999996
- type: map_at_10
value: 57.589999999999996
- type: map_at_100
value: 58.226
- type: map_at_1000
value: 58.251
- type: map_at_3
value: 55.233
- type: map_at_5
value: 56.633
- type: mrr_at_1
value: 47.699999999999996
- type: mrr_at_10
value: 57.589999999999996
- type: mrr_at_100
value: 58.226
- type: mrr_at_1000
value: 58.251
- type: mrr_at_3
value: 55.233
- type: mrr_at_5
value: 56.633
- type: ndcg_at_1
value: 47.699999999999996
- type: ndcg_at_10
value: 62.505
- type: ndcg_at_100
value: 65.517
- type: ndcg_at_1000
value: 66.19800000000001
- type: ndcg_at_3
value: 57.643
- type: ndcg_at_5
value: 60.181
- type: precision_at_1
value: 47.699999999999996
- type: precision_at_10
value: 7.8
- type: precision_at_100
value: 0.919
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 21.532999999999998
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 47.699999999999996
- type: recall_at_10
value: 78.0
- type: recall_at_100
value: 91.9
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 64.60000000000001
- type: recall_at_5
value: 70.8
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 44.84801846864178
- type: f1
value: 37.47347897956339
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 85.81613508442777
- type: ap
value: 52.68244615477374
- type: f1
value: 80.0445640948843
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.57786502217138
- type: cos_sim_spearman
value: 75.39106054489906
- type: euclidean_pearson
value: 73.72082954602402
- type: euclidean_spearman
value: 75.14421475913619
- type: manhattan_pearson
value: 73.62463076633642
- type: manhattan_spearman
value: 75.01301565104112
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 29.143797057999134
- type: mrr
value: 28.08174603174603
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 70.492
- type: map_at_10
value: 79.501
- type: map_at_100
value: 79.728
- type: map_at_1000
value: 79.735
- type: map_at_3
value: 77.77
- type: map_at_5
value: 78.851
- type: mrr_at_1
value: 72.822
- type: mrr_at_10
value: 80.001
- type: mrr_at_100
value: 80.19
- type: mrr_at_1000
value: 80.197
- type: mrr_at_3
value: 78.484
- type: mrr_at_5
value: 79.42099999999999
- type: ndcg_at_1
value: 72.822
- type: ndcg_at_10
value: 83.013
- type: ndcg_at_100
value: 84.013
- type: ndcg_at_1000
value: 84.20400000000001
- type: ndcg_at_3
value: 79.728
- type: ndcg_at_5
value: 81.542
- type: precision_at_1
value: 72.822
- type: precision_at_10
value: 9.917
- type: precision_at_100
value: 1.042
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 29.847
- type: precision_at_5
value: 18.871
- type: recall_at_1
value: 70.492
- type: recall_at_10
value: 93.325
- type: recall_at_100
value: 97.822
- type: recall_at_1000
value: 99.319
- type: recall_at_3
value: 84.636
- type: recall_at_5
value: 88.93100000000001
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.88298587760592
- type: f1
value: 73.89001762017176
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.76328177538669
- type: f1
value: 80.24718532423358
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 49.6
- type: map_at_10
value: 55.620999999999995
- type: map_at_100
value: 56.204
- type: map_at_1000
value: 56.251
- type: map_at_3
value: 54.132999999999996
- type: map_at_5
value: 54.933
- type: mrr_at_1
value: 49.7
- type: mrr_at_10
value: 55.67100000000001
- type: mrr_at_100
value: 56.254000000000005
- type: mrr_at_1000
value: 56.301
- type: mrr_at_3
value: 54.18300000000001
- type: mrr_at_5
value: 54.983000000000004
- type: ndcg_at_1
value: 49.6
- type: ndcg_at_10
value: 58.645
- type: ndcg_at_100
value: 61.789
- type: ndcg_at_1000
value: 63.219
- type: ndcg_at_3
value: 55.567
- type: ndcg_at_5
value: 57.008
- type: precision_at_1
value: 49.6
- type: precision_at_10
value: 6.819999999999999
- type: precision_at_100
value: 0.836
- type: precision_at_1000
value: 0.095
- type: precision_at_3
value: 19.900000000000002
- type: precision_at_5
value: 12.64
- type: recall_at_1
value: 49.6
- type: recall_at_10
value: 68.2
- type: recall_at_100
value: 83.6
- type: recall_at_1000
value: 95.3
- type: recall_at_3
value: 59.699999999999996
- type: recall_at_5
value: 63.2
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 74.45666666666666
- type: f1
value: 74.32582402190089
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 80.67135896047645
- type: cos_sim_ap
value: 87.60421240712051
- type: cos_sim_f1
value: 82.1304131408661
- type: cos_sim_precision
value: 77.68361581920904
- type: cos_sim_recall
value: 87.11721224920802
- type: dot_accuracy
value: 79.04710341093666
- type: dot_ap
value: 85.6370059719336
- type: dot_f1
value: 80.763723150358
- type: dot_precision
value: 73.69337979094077
- type: dot_recall
value: 89.33474128827878
- type: euclidean_accuracy
value: 81.05035192203573
- type: euclidean_ap
value: 87.7880240053663
- type: euclidean_f1
value: 82.50244379276637
- type: euclidean_precision
value: 76.7970882620564
- type: euclidean_recall
value: 89.1235480464625
- type: manhattan_accuracy
value: 80.61721710882512
- type: manhattan_ap
value: 87.43568120591175
- type: manhattan_f1
value: 81.89526184538653
- type: manhattan_precision
value: 77.5992438563327
- type: manhattan_recall
value: 86.6948257655755
- type: max_accuracy
value: 81.05035192203573
- type: max_ap
value: 87.7880240053663
- type: max_f1
value: 82.50244379276637
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 93.5
- type: ap
value: 91.31357903446782
- type: f1
value: 93.48088994006616
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 36.93293453538077
- type: cos_sim_spearman
value: 42.45972506308574
- type: euclidean_pearson
value: 42.34945133152159
- type: euclidean_spearman
value: 42.331610303674644
- type: manhattan_pearson
value: 42.31455070249498
- type: manhattan_spearman
value: 42.19887982891834
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 33.683290790043785
- type: cos_sim_spearman
value: 35.149171171202994
- type: euclidean_pearson
value: 32.33806561267862
- type: euclidean_spearman
value: 34.483576387347966
- type: manhattan_pearson
value: 32.47629754599608
- type: manhattan_spearman
value: 34.66434471867615
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 66.46322760516104
- type: cos_sim_spearman
value: 67.398478319726
- type: euclidean_pearson
value: 64.7223480293625
- type: euclidean_spearman
value: 66.83118568812951
- type: manhattan_pearson
value: 64.88440039828305
- type: manhattan_spearman
value: 66.80429458952257
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 79.08991383232105
- type: cos_sim_spearman
value: 79.39715677296854
- type: euclidean_pearson
value: 78.63201279320496
- type: euclidean_spearman
value: 79.40262660785731
- type: manhattan_pearson
value: 78.98138363146906
- type: manhattan_spearman
value: 79.79968413014194
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.43289278789972
- type: mrr
value: 77.53012460908535
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 27.733999999999998
- type: map_at_10
value: 78.24799999999999
- type: map_at_100
value: 81.765
- type: map_at_1000
value: 81.824
- type: map_at_3
value: 54.92
- type: map_at_5
value: 67.61399999999999
- type: mrr_at_1
value: 90.527
- type: mrr_at_10
value: 92.843
- type: mrr_at_100
value: 92.927
- type: mrr_at_1000
value: 92.93
- type: mrr_at_3
value: 92.45100000000001
- type: mrr_at_5
value: 92.693
- type: ndcg_at_1
value: 90.527
- type: ndcg_at_10
value: 85.466
- type: ndcg_at_100
value: 88.846
- type: ndcg_at_1000
value: 89.415
- type: ndcg_at_3
value: 86.768
- type: ndcg_at_5
value: 85.46000000000001
- type: precision_at_1
value: 90.527
- type: precision_at_10
value: 42.488
- type: precision_at_100
value: 5.024
- type: precision_at_1000
value: 0.516
- type: precision_at_3
value: 75.907
- type: precision_at_5
value: 63.727000000000004
- type: recall_at_1
value: 27.733999999999998
- type: recall_at_10
value: 84.346
- type: recall_at_100
value: 95.536
- type: recall_at_1000
value: 98.42999999999999
- type: recall_at_3
value: 56.455
- type: recall_at_5
value: 70.755
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 49.952000000000005
- type: f1
value: 48.264617195258054
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 68.23769904483508
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 62.50294403136556
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 54.0
- type: map_at_10
value: 63.668
- type: map_at_100
value: 64.217
- type: map_at_1000
value: 64.23100000000001
- type: map_at_3
value: 61.7
- type: map_at_5
value: 62.870000000000005
- type: mrr_at_1
value: 54.0
- type: mrr_at_10
value: 63.668
- type: mrr_at_100
value: 64.217
- type: mrr_at_1000
value: 64.23100000000001
- type: mrr_at_3
value: 61.7
- type: mrr_at_5
value: 62.870000000000005
- type: ndcg_at_1
value: 54.0
- type: ndcg_at_10
value: 68.11399999999999
- type: ndcg_at_100
value: 70.723
- type: ndcg_at_1000
value: 71.123
- type: ndcg_at_3
value: 64.074
- type: ndcg_at_5
value: 66.178
- type: precision_at_1
value: 54.0
- type: precision_at_10
value: 8.200000000000001
- type: precision_at_100
value: 0.941
- type: precision_at_1000
value: 0.097
- type: precision_at_3
value: 23.633000000000003
- type: precision_at_5
value: 15.2
- type: recall_at_1
value: 54.0
- type: recall_at_10
value: 82.0
- type: recall_at_100
value: 94.1
- type: recall_at_1000
value: 97.3
- type: recall_at_3
value: 70.89999999999999
- type: recall_at_5
value: 76.0
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 86.63000000000001
- type: ap
value: 69.99457882599567
- type: f1
value: 85.07735617998541
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 44.594104491193555
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 63.97614314115309
- type: f1
value: 52.15634261679283
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 32.646
- type: map_at_10
value: 47.963
- type: map_at_100
value: 48.789
- type: map_at_1000
value: 48.797000000000004
- type: map_at_3
value: 43.196
- type: map_at_5
value: 46.016
- type: mrr_at_1
value: 33.073
- type: mrr_at_10
value: 48.126000000000005
- type: mrr_at_100
value: 48.946
- type: mrr_at_1000
value: 48.953
- type: mrr_at_3
value: 43.374
- type: mrr_at_5
value: 46.147
- type: ndcg_at_1
value: 32.646
- type: ndcg_at_10
value: 56.481
- type: ndcg_at_100
value: 59.922
- type: ndcg_at_1000
value: 60.07
- type: ndcg_at_3
value: 46.675
- type: ndcg_at_5
value: 51.76500000000001
- type: precision_at_1
value: 32.646
- type: precision_at_10
value: 8.371
- type: precision_at_100
value: 0.9860000000000001
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.919
- type: precision_at_5
value: 13.825999999999999
- type: recall_at_1
value: 32.646
- type: recall_at_10
value: 83.71300000000001
- type: recall_at_100
value: 98.578
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 56.757000000000005
- type: recall_at_5
value: 69.132
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.56
- type: ap
value: 23.310493680488513
- type: f1
value: 58.85369533105693
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 88.5
- type: cos_sim_ap
value: 72.42140924378361
- type: cos_sim_f1
value: 66.0919540229885
- type: cos_sim_precision
value: 72.78481012658227
- type: cos_sim_recall
value: 60.526315789473685
- type: dot_accuracy
value: 88.5
- type: dot_ap
value: 72.42140924378361
- type: dot_f1
value: 66.0919540229885
- type: dot_precision
value: 72.78481012658227
- type: dot_recall
value: 60.526315789473685
- type: euclidean_accuracy
value: 88.5
- type: euclidean_ap
value: 72.42140924378361
- type: euclidean_f1
value: 66.0919540229885
- type: euclidean_precision
value: 72.78481012658227
- type: euclidean_recall
value: 60.526315789473685
- type: manhattan_accuracy
value: 88.5
- type: manhattan_ap
value: 72.49745515311696
- type: manhattan_f1
value: 66.0968660968661
- type: manhattan_precision
value: 72.04968944099379
- type: manhattan_recall
value: 61.05263157894737
- type: max_accuracy
value: 88.5
- type: max_ap
value: 72.49745515311696
- type: max_f1
value: 66.0968660968661
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 90.32269765590145
- type: cos_sim_spearman
value: 89.73666311491672
- type: euclidean_pearson
value: 88.2933868516544
- type: euclidean_spearman
value: 89.73666311491672
- type: manhattan_pearson
value: 88.33474590219448
- type: manhattan_spearman
value: 89.8548364866583
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 7.632999999999999
- type: map_at_10
value: 16.426
- type: map_at_100
value: 22.651
- type: map_at_1000
value: 24.372
- type: map_at_3
value: 11.706
- type: map_at_5
value: 13.529
- type: mrr_at_1
value: 60.75000000000001
- type: mrr_at_10
value: 68.613
- type: mrr_at_100
value: 69.001
- type: mrr_at_1000
value: 69.021
- type: mrr_at_3
value: 67.0
- type: mrr_at_5
value: 67.925
- type: ndcg_at_1
value: 49.875
- type: ndcg_at_10
value: 36.978
- type: ndcg_at_100
value: 40.031
- type: ndcg_at_1000
value: 47.566
- type: ndcg_at_3
value: 41.148
- type: ndcg_at_5
value: 38.702
- type: precision_at_1
value: 60.75000000000001
- type: precision_at_10
value: 29.7
- type: precision_at_100
value: 9.278
- type: precision_at_1000
value: 2.099
- type: precision_at_3
value: 44.0
- type: precision_at_5
value: 37.6
- type: recall_at_1
value: 7.632999999999999
- type: recall_at_10
value: 22.040000000000003
- type: recall_at_100
value: 44.024
- type: recall_at_1000
value: 67.848
- type: recall_at_3
value: 13.093
- type: recall_at_5
value: 15.973
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 15.473
- type: map_at_10
value: 24.579
- type: map_at_100
value: 26.387
- type: map_at_1000
value: 26.57
- type: map_at_3
value: 21.278
- type: map_at_5
value: 23.179
- type: mrr_at_1
value: 30.709999999999997
- type: mrr_at_10
value: 38.994
- type: mrr_at_100
value: 39.993
- type: mrr_at_1000
value: 40.044999999999995
- type: mrr_at_3
value: 36.342999999999996
- type: mrr_at_5
value: 37.846999999999994
- type: ndcg_at_1
value: 30.709999999999997
- type: ndcg_at_10
value: 31.608999999999998
- type: ndcg_at_100
value: 38.807
- type: ndcg_at_1000
value: 42.208
- type: ndcg_at_3
value: 28.086
- type: ndcg_at_5
value: 29.323
- type: precision_at_1
value: 30.709999999999997
- type: precision_at_10
value: 8.688
- type: precision_at_100
value: 1.608
- type: precision_at_1000
value: 0.22100000000000003
- type: precision_at_3
value: 18.724
- type: precision_at_5
value: 13.950999999999999
- type: recall_at_1
value: 15.473
- type: recall_at_10
value: 38.361000000000004
- type: recall_at_100
value: 65.2
- type: recall_at_1000
value: 85.789
- type: recall_at_3
value: 25.401
- type: recall_at_5
value: 30.875999999999998
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 38.096000000000004
- type: map_at_10
value: 51.44499999999999
- type: map_at_100
value: 52.325
- type: map_at_1000
value: 52.397000000000006
- type: map_at_3
value: 48.626999999999995
- type: map_at_5
value: 50.342
- type: mrr_at_1
value: 76.19200000000001
- type: mrr_at_10
value: 81.191
- type: mrr_at_100
value: 81.431
- type: mrr_at_1000
value: 81.443
- type: mrr_at_3
value: 80.30199999999999
- type: mrr_at_5
value: 80.85900000000001
- type: ndcg_at_1
value: 76.19200000000001
- type: ndcg_at_10
value: 60.9
- type: ndcg_at_100
value: 64.14699999999999
- type: ndcg_at_1000
value: 65.647
- type: ndcg_at_3
value: 56.818000000000005
- type: ndcg_at_5
value: 59.019999999999996
- type: precision_at_1
value: 76.19200000000001
- type: precision_at_10
value: 12.203
- type: precision_at_100
value: 1.478
- type: precision_at_1000
value: 0.168
- type: precision_at_3
value: 34.616
- type: precision_at_5
value: 22.515
- type: recall_at_1
value: 38.096000000000004
- type: recall_at_10
value: 61.013
- type: recall_at_100
value: 73.90299999999999
- type: recall_at_1000
value: 83.91
- type: recall_at_3
value: 51.92400000000001
- type: recall_at_5
value: 56.286
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 1.548
- type: map_at_10
value: 11.049000000000001
- type: map_at_100
value: 28.874
- type: map_at_1000
value: 34.931
- type: map_at_3
value: 4.162
- type: map_at_5
value: 6.396
- type: mrr_at_1
value: 90.69800000000001
- type: mrr_at_10
value: 92.093
- type: mrr_at_100
value: 92.345
- type: mrr_at_1000
value: 92.345
- type: mrr_at_3
value: 91.86
- type: mrr_at_5
value: 91.86
- type: ndcg_at_1
value: 74.031
- type: ndcg_at_10
value: 63.978
- type: ndcg_at_100
value: 53.101
- type: ndcg_at_1000
value: 60.675999999999995
- type: ndcg_at_3
value: 71.421
- type: ndcg_at_5
value: 68.098
- type: precision_at_1
value: 90.69800000000001
- type: precision_at_10
value: 71.86
- type: precision_at_100
value: 31.395
- type: precision_at_1000
value: 5.981
- type: precision_at_3
value: 84.49600000000001
- type: precision_at_5
value: 79.07
- type: recall_at_1
value: 1.548
- type: recall_at_10
value: 12.149000000000001
- type: recall_at_100
value: 40.794999999999995
- type: recall_at_1000
value: 67.974
- type: recall_at_3
value: 4.244
- type: recall_at_5
value: 6.608
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.55413584398119
- type: f1
value: 69.65610882318181
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.37188971082716
- type: f1
value: 75.64847309941361
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.919
- type: map_at_10
value: 10.834000000000001
- type: map_at_100
value: 13.38
- type: map_at_1000
value: 14.581
- type: map_at_3
value: 8.198
- type: map_at_5
value: 9.428
- type: mrr_at_1
value: 41.176
- type: mrr_at_10
value: 50.083
- type: mrr_at_100
value: 50.559
- type: mrr_at_1000
value: 50.604000000000006
- type: mrr_at_3
value: 47.936
- type: mrr_at_5
value: 49.407000000000004
- type: ndcg_at_1
value: 39.628
- type: ndcg_at_10
value: 30.098000000000003
- type: ndcg_at_100
value: 27.061
- type: ndcg_at_1000
value: 35.94
- type: ndcg_at_3
value: 35.135
- type: ndcg_at_5
value: 33.335
- type: precision_at_1
value: 41.176
- type: precision_at_10
value: 22.259999999999998
- type: precision_at_100
value: 6.712
- type: precision_at_1000
value: 1.9060000000000001
- type: precision_at_3
value: 33.23
- type: precision_at_5
value: 29.04
- type: recall_at_1
value: 4.919
- type: recall_at_10
value: 14.196
- type: recall_at_100
value: 26.948
- type: recall_at_1000
value: 59.211000000000006
- type: recall_at_3
value: 9.44
- type: recall_at_5
value: 11.569
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 25.35
- type: map_at_10
value: 37.884
- type: map_at_100
value: 38.955
- type: map_at_1000
value: 39.007999999999996
- type: map_at_3
value: 34.239999999999995
- type: map_at_5
value: 36.398
- type: mrr_at_1
value: 28.737000000000002
- type: mrr_at_10
value: 39.973
- type: mrr_at_100
value: 40.844
- type: mrr_at_1000
value: 40.885
- type: mrr_at_3
value: 36.901
- type: mrr_at_5
value: 38.721
- type: ndcg_at_1
value: 28.708
- type: ndcg_at_10
value: 44.204
- type: ndcg_at_100
value: 48.978
- type: ndcg_at_1000
value: 50.33
- type: ndcg_at_3
value: 37.36
- type: ndcg_at_5
value: 40.912
- type: precision_at_1
value: 28.708
- type: precision_at_10
value: 7.367
- type: precision_at_100
value: 1.0030000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 17.034
- type: precision_at_5
value: 12.293999999999999
- type: recall_at_1
value: 25.35
- type: recall_at_10
value: 61.411
- type: recall_at_100
value: 82.599
- type: recall_at_1000
value: 92.903
- type: recall_at_3
value: 43.728
- type: recall_at_5
value: 51.854
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.49422763833996
- type: f1
value: 66.73472657783407
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 81.0
- type: cos_sim_ap
value: 91.47194213011349
- type: cos_sim_f1
value: 84.73767885532592
- type: cos_sim_precision
value: 81.49847094801224
- type: cos_sim_recall
value: 88.24503311258279
- type: dot_accuracy
value: 81.0
- type: dot_ap
value: 91.47194213011349
- type: dot_f1
value: 84.73767885532592
- type: dot_precision
value: 81.49847094801224
- type: dot_recall
value: 88.24503311258279
- type: euclidean_accuracy
value: 81.0
- type: euclidean_ap
value: 91.47194213011349
- type: euclidean_f1
value: 84.73767885532592
- type: euclidean_precision
value: 81.49847094801224
- type: euclidean_recall
value: 88.24503311258279
- type: manhattan_accuracy
value: 81.0
- type: manhattan_ap
value: 91.46464475050571
- type: manhattan_f1
value: 84.48687350835321
- type: manhattan_precision
value: 81.31699846860643
- type: manhattan_recall
value: 87.91390728476821
- type: max_accuracy
value: 81.0
- type: max_ap
value: 91.47194213011349
- type: max_f1
value: 84.73767885532592
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.6808905380334
- type: cos_sim_ap
value: 99.27948611836348
- type: cos_sim_f1
value: 96.15975422427034
- type: cos_sim_precision
value: 96.90402476780186
- type: cos_sim_recall
value: 95.42682926829268
- type: dot_accuracy
value: 97.6808905380334
- type: dot_ap
value: 99.2794861183635
- type: dot_f1
value: 96.15975422427034
- type: dot_precision
value: 96.90402476780186
- type: dot_recall
value: 95.42682926829268
- type: euclidean_accuracy
value: 97.6808905380334
- type: euclidean_ap
value: 99.2794861183635
- type: euclidean_f1
value: 96.15975422427034
- type: euclidean_precision
value: 96.90402476780186
- type: euclidean_recall
value: 95.42682926829268
- type: manhattan_accuracy
value: 97.6808905380334
- type: manhattan_ap
value: 99.28715055268721
- type: manhattan_f1
value: 96.14791987673343
- type: manhattan_precision
value: 97.19626168224299
- type: manhattan_recall
value: 95.1219512195122
- type: max_accuracy
value: 97.6808905380334
- type: max_ap
value: 99.28715055268721
- type: max_f1
value: 96.15975422427034
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 86.16343490304708
- type: f1
value: 83.3442579486744
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 68.40080971659918
- type: f1
value: 53.13720751142237
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 63.322
- type: map_at_10
value: 76.847
- type: map_at_100
value: 77.616
- type: map_at_1000
value: 77.644
- type: map_at_3
value: 73.624
- type: map_at_5
value: 75.603
- type: mrr_at_1
value: 72.88
- type: mrr_at_10
value: 80.376
- type: mrr_at_100
value: 80.604
- type: mrr_at_1000
value: 80.61
- type: mrr_at_3
value: 78.92
- type: mrr_at_5
value: 79.869
- type: ndcg_at_1
value: 72.89999999999999
- type: ndcg_at_10
value: 81.43
- type: ndcg_at_100
value: 83.394
- type: ndcg_at_1000
value: 83.685
- type: ndcg_at_3
value: 77.62599999999999
- type: ndcg_at_5
value: 79.656
- type: precision_at_1
value: 72.89999999999999
- type: precision_at_10
value: 12.548
- type: precision_at_100
value: 1.4869999999999999
- type: precision_at_1000
value: 0.155
- type: precision_at_3
value: 34.027
- type: precision_at_5
value: 22.654
- type: recall_at_1
value: 63.322
- type: recall_at_10
value: 90.664
- type: recall_at_100
value: 97.974
- type: recall_at_1000
value: 99.636
- type: recall_at_3
value: 80.067
- type: recall_at_5
value: 85.526
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.95
- type: map_at_10
value: 9.658999999999999
- type: map_at_100
value: 11.384
- type: map_at_1000
value: 11.677
- type: map_at_3
value: 7.055
- type: map_at_5
value: 8.244
- type: mrr_at_1
value: 19.5
- type: mrr_at_10
value: 28.777
- type: mrr_at_100
value: 29.936
- type: mrr_at_1000
value: 30.009999999999998
- type: mrr_at_3
value: 25.55
- type: mrr_at_5
value: 27.284999999999997
- type: ndcg_at_1
value: 19.5
- type: ndcg_at_10
value: 16.589000000000002
- type: ndcg_at_100
value: 23.879
- type: ndcg_at_1000
value: 29.279
- type: ndcg_at_3
value: 15.719
- type: ndcg_at_5
value: 13.572000000000001
- type: precision_at_1
value: 19.5
- type: precision_at_10
value: 8.62
- type: precision_at_100
value: 1.924
- type: precision_at_1000
value: 0.322
- type: precision_at_3
value: 14.6
- type: precision_at_5
value: 11.78
- type: recall_at_1
value: 3.95
- type: recall_at_10
value: 17.477999999999998
- type: recall_at_100
value: 38.99
- type: recall_at_1000
value: 65.417
- type: recall_at_3
value: 8.883000000000001
- type: recall_at_5
value: 11.933
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 83.48960456583775
- type: cos_sim_ap
value: 76.31522115825375
- type: cos_sim_f1
value: 70.35573122529645
- type: cos_sim_precision
value: 70.9934735315446
- type: cos_sim_recall
value: 69.72934472934473
- type: dot_accuracy
value: 83.48960456583775
- type: dot_ap
value: 76.31522115825373
- type: dot_f1
value: 70.35573122529645
- type: dot_precision
value: 70.9934735315446
- type: dot_recall
value: 69.72934472934473
- type: euclidean_accuracy
value: 83.48960456583775
- type: euclidean_ap
value: 76.31522115825373
- type: euclidean_f1
value: 70.35573122529645
- type: euclidean_precision
value: 70.9934735315446
- type: euclidean_recall
value: 69.72934472934473
- type: manhattan_accuracy
value: 83.46922136159804
- type: manhattan_ap
value: 76.18474601388084
- type: manhattan_f1
value: 70.34779490856937
- type: manhattan_precision
value: 70.83032490974729
- type: manhattan_recall
value: 69.87179487179486
- type: max_accuracy
value: 83.48960456583775
- type: max_ap
value: 76.31522115825375
- type: max_f1
value: 70.35573122529645
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 77.95374883876302
- type: cos_sim_spearman
value: 73.77630219171942
- type: euclidean_pearson
value: 75.81927069594934
- type: euclidean_spearman
value: 73.7763211303831
- type: manhattan_pearson
value: 76.03126859057528
- type: manhattan_spearman
value: 73.96528138013369
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.388282764841826
- type: cos_sim_spearman
value: 40.83477184710897
- type: euclidean_pearson
value: 26.754737044177805
- type: euclidean_spearman
value: 40.83477184710897
- type: manhattan_pearson
value: 26.760453110872458
- type: manhattan_spearman
value: 41.034477441383856
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 49.15
- type: map_at_10
value: 61.690999999999995
- type: map_at_100
value: 62.348000000000006
- type: map_at_1000
value: 62.38
- type: map_at_3
value: 58.824
- type: map_at_5
value: 60.662000000000006
- type: mrr_at_1
value: 51.333
- type: mrr_at_10
value: 62.731
- type: mrr_at_100
value: 63.245
- type: mrr_at_1000
value: 63.275000000000006
- type: mrr_at_3
value: 60.667
- type: mrr_at_5
value: 61.93300000000001
- type: ndcg_at_1
value: 51.333
- type: ndcg_at_10
value: 67.168
- type: ndcg_at_100
value: 69.833
- type: ndcg_at_1000
value: 70.56700000000001
- type: ndcg_at_3
value: 62.40599999999999
- type: ndcg_at_5
value: 65.029
- type: precision_at_1
value: 51.333
- type: precision_at_10
value: 9.333
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.333
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 49.15
- type: recall_at_10
value: 82.533
- type: recall_at_100
value: 94.167
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 69.917
- type: recall_at_5
value: 76.356
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.261
- type: map_at_10
value: 2.1260000000000003
- type: map_at_100
value: 12.171999999999999
- type: map_at_1000
value: 26.884999999999998
- type: map_at_3
value: 0.695
- type: map_at_5
value: 1.134
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 96.952
- type: mrr_at_100
value: 96.952
- type: mrr_at_1000
value: 96.952
- type: mrr_at_3
value: 96.667
- type: mrr_at_5
value: 96.667
- type: ndcg_at_1
value: 92.0
- type: ndcg_at_10
value: 81.193
- type: ndcg_at_100
value: 61.129
- type: ndcg_at_1000
value: 51.157
- type: ndcg_at_3
value: 85.693
- type: ndcg_at_5
value: 84.129
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 85.39999999999999
- type: precision_at_100
value: 62.03999999999999
- type: precision_at_1000
value: 22.224
- type: precision_at_3
value: 88.0
- type: precision_at_5
value: 88.0
- type: recall_at_1
value: 0.261
- type: recall_at_10
value: 2.262
- type: recall_at_100
value: 14.981
- type: recall_at_1000
value: 46.837
- type: recall_at_3
value: 0.703
- type: recall_at_5
value: 1.172
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 70.55290063940157
- type: v_measure
value: 55.41500719337263
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.48697375332002
- type: mrr
value: 75.01836585523822
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.454
- type: map_at_10
value: 51.605000000000004
- type: map_at_100
value: 52.653000000000006
- type: map_at_1000
value: 52.697
- type: map_at_3
value: 48.304
- type: map_at_5
value: 50.073
- type: mrr_at_1
value: 43.307
- type: mrr_at_10
value: 54.400000000000006
- type: mrr_at_100
value: 55.147999999999996
- type: mrr_at_1000
value: 55.174
- type: mrr_at_3
value: 51.77
- type: mrr_at_5
value: 53.166999999999994
- type: ndcg_at_1
value: 43.307
- type: ndcg_at_10
value: 57.891000000000005
- type: ndcg_at_100
value: 62.161
- type: ndcg_at_1000
value: 63.083
- type: ndcg_at_3
value: 51.851
- type: ndcg_at_5
value: 54.605000000000004
- type: precision_at_1
value: 43.307
- type: precision_at_10
value: 9.033
- type: precision_at_100
value: 1.172
- type: precision_at_1000
value: 0.127
- type: precision_at_3
value: 22.798
- type: precision_at_5
value: 15.492
- type: recall_at_1
value: 38.454
- type: recall_at_10
value: 74.166
- type: recall_at_100
value: 92.43599999999999
- type: recall_at_1000
value: 99.071
- type: recall_at_3
value: 58.087
- type: recall_at_5
value: 64.568
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.474
- type: f1
value: 50.38275392350236
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 2.252
- type: map_at_10
value: 4.661
- type: map_at_100
value: 5.271
- type: map_at_1000
value: 5.3629999999999995
- type: map_at_3
value: 3.604
- type: map_at_5
value: 4.3020000000000005
- type: mrr_at_1
value: 2.252
- type: mrr_at_10
value: 4.661
- type: mrr_at_100
value: 5.271
- type: mrr_at_1000
value: 5.3629999999999995
- type: mrr_at_3
value: 3.604
- type: mrr_at_5
value: 4.3020000000000005
- type: ndcg_at_1
value: 2.252
- type: ndcg_at_10
value: 6.3020000000000005
- type: ndcg_at_100
value: 10.342
- type: ndcg_at_1000
value: 13.475999999999999
- type: ndcg_at_3
value: 4.0649999999999995
- type: ndcg_at_5
value: 5.344
- type: precision_at_1
value: 2.252
- type: precision_at_10
value: 1.171
- type: precision_at_100
value: 0.333
- type: precision_at_1000
value: 0.059000000000000004
- type: precision_at_3
value: 1.802
- type: precision_at_5
value: 1.712
- type: recall_at_1
value: 2.252
- type: recall_at_10
value: 11.712
- type: recall_at_100
value: 33.333
- type: recall_at_1000
value: 59.458999999999996
- type: recall_at_3
value: 5.405
- type: recall_at_5
value: 8.559
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 28.301882091023288
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 45.26992995191701
- type: v_measure
value: 42.773174876871145
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.47635452552458
- type: f1
value: 93.19922617577213
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 80.2317569683683
- type: f1
value: 56.18060418621901
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 85.18957345971565
- type: f1
value: 80.829981537394
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 71.04138999801822
- type: v_measure
value: 71.7056263158008
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.65097511768661
- type: f1
value: 73.82441070598712
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 79.09885675857431
- type: f1
value: 78.28407777434224
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 25.307000000000002
- type: map_at_10
value: 36.723
- type: map_at_100
value: 37.713
- type: map_at_1000
value: 37.769000000000005
- type: map_at_3
value: 33.77
- type: map_at_5
value: 35.463
- type: mrr_at_1
value: 25.307000000000002
- type: mrr_at_10
value: 36.723
- type: mrr_at_100
value: 37.713
- type: mrr_at_1000
value: 37.769000000000005
- type: mrr_at_3
value: 33.77
- type: mrr_at_5
value: 35.463
- type: ndcg_at_1
value: 25.307000000000002
- type: ndcg_at_10
value: 42.559999999999995
- type: ndcg_at_100
value: 47.457
- type: ndcg_at_1000
value: 49.162
- type: ndcg_at_3
value: 36.461
- type: ndcg_at_5
value: 39.504
- type: precision_at_1
value: 25.307000000000002
- type: precision_at_10
value: 6.106
- type: precision_at_100
value: 0.8420000000000001
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 14.741999999999999
- type: precision_at_5
value: 10.319
- type: recall_at_1
value: 25.307000000000002
- type: recall_at_10
value: 61.056999999999995
- type: recall_at_100
value: 84.152
- type: recall_at_1000
value: 98.03399999999999
- type: recall_at_3
value: 44.226
- type: recall_at_5
value: 51.597
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 70.8
- type: cos_sim_ap
value: 73.7671529695957
- type: cos_sim_f1
value: 68.80964339527875
- type: cos_sim_precision
value: 62.95955882352941
- type: cos_sim_recall
value: 75.85825027685493
- type: dot_accuracy
value: 70.8
- type: dot_ap
value: 73.78345265366947
- type: dot_f1
value: 68.80964339527875
- type: dot_precision
value: 62.95955882352941
- type: dot_recall
value: 75.85825027685493
- type: euclidean_accuracy
value: 70.8
- type: euclidean_ap
value: 73.7671529695957
- type: euclidean_f1
value: 68.80964339527875
- type: euclidean_precision
value: 62.95955882352941
- type: euclidean_recall
value: 75.85825027685493
- type: manhattan_accuracy
value: 70.75
- type: manhattan_ap
value: 73.78996383615953
- type: manhattan_f1
value: 68.79432624113475
- type: manhattan_precision
value: 63.39869281045751
- type: manhattan_recall
value: 75.1937984496124
- type: max_accuracy
value: 70.8
- type: max_ap
value: 73.78996383615953
- type: max_f1
value: 68.80964339527875
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 84.03253762760392
- type: cos_sim_spearman
value: 79.68280105762004
- type: euclidean_pearson
value: 80.98265050044444
- type: euclidean_spearman
value: 79.68233242682867
- type: manhattan_pearson
value: 80.9678911810704
- type: manhattan_spearman
value: 79.70264097683109
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 80.56896987572884
- type: cos_sim_spearman
value: 81.84352499523287
- type: euclidean_pearson
value: 80.40831759421305
- type: euclidean_spearman
value: 81.84352499523287
- type: manhattan_pearson
value: 80.74333857561238
- type: manhattan_spearman
value: 82.41503246733892
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 82.71826762276979
- type: cos_sim_spearman
value: 82.25433354916042
- type: euclidean_pearson
value: 81.87115571724316
- type: euclidean_spearman
value: 82.25322342890107
- type: manhattan_pearson
value: 82.11174867527224
- type: manhattan_spearman
value: 82.55905365203084
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 30.659441623392887
- type: cos_sim_spearman
value: 30.501134097353315
- type: dot_pearson
value: 30.659444768851056
- type: dot_spearman
value: 30.501134097353315
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 94.03333333333333
- type: mrr
value: 94.03333333333333
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 79.0
- type: map_at_10
value: 87.61
- type: map_at_100
value: 87.655
- type: map_at_1000
value: 87.655
- type: map_at_3
value: 87.167
- type: map_at_5
value: 87.36699999999999
- type: mrr_at_1
value: 79.0
- type: mrr_at_10
value: 87.61
- type: mrr_at_100
value: 87.655
- type: mrr_at_1000
value: 87.655
- type: mrr_at_3
value: 87.167
- type: mrr_at_5
value: 87.36699999999999
- type: ndcg_at_1
value: 79.0
- type: ndcg_at_10
value: 90.473
- type: ndcg_at_100
value: 90.694
- type: ndcg_at_1000
value: 90.694
- type: ndcg_at_3
value: 89.464
- type: ndcg_at_5
value: 89.851
- type: precision_at_1
value: 79.0
- type: precision_at_10
value: 9.9
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 32.0
- type: precision_at_5
value: 19.400000000000002
- type: recall_at_1
value: 79.0
- type: recall_at_10
value: 99.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 96.0
- type: recall_at_5
value: 97.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 39.395
- type: map_at_10
value: 59.123999999999995
- type: map_at_100
value: 60.704
- type: map_at_1000
value: 60.760000000000005
- type: map_at_3
value: 53.187
- type: map_at_5
value: 56.863
- type: mrr_at_1
value: 62.083
- type: mrr_at_10
value: 68.87299999999999
- type: mrr_at_100
value: 69.46900000000001
- type: mrr_at_1000
value: 69.48299999999999
- type: mrr_at_3
value: 66.8
- type: mrr_at_5
value: 67.928
- type: ndcg_at_1
value: 62.083
- type: ndcg_at_10
value: 65.583
- type: ndcg_at_100
value: 70.918
- type: ndcg_at_1000
value: 71.72800000000001
- type: ndcg_at_3
value: 60.428000000000004
- type: ndcg_at_5
value: 61.853
- type: precision_at_1
value: 62.083
- type: precision_at_10
value: 15.033
- type: precision_at_100
value: 1.9529999999999998
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 36.315
- type: precision_at_5
value: 25.955000000000002
- type: recall_at_1
value: 39.395
- type: recall_at_10
value: 74.332
- type: recall_at_100
value: 94.729
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 57.679
- type: recall_at_5
value: 65.036
---
# Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-1.5B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -c 2048
```
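Once the server is running, you can query it over HTTP. Below is a minimal sketch, assuming the server's default bind address of `127.0.0.1:8080` and its standard `/completion` endpoint:
```python
import json
import urllib.request

# Assumes llama-server (started as above) is listening on its default port
payload = {"prompt": "The meaning to life and the universe is", "n_predict": 64}
req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])
```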
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Hoshino-Yumetsuki/gte-Qwen2-1.5B-instruct-Q8_0-GGUF --hf-file gte-qwen2-1.5b-instruct-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
Fizzarolli/pythia-2.8b-anneal-base | Fizzarolli | null | [
"safetensors",
"gpt_neox",
"pytorch",
"causal-lm",
"pythia",
"en",
"dataset:EleutherAI/pile",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"license:apache-2.0",
"region:us"
] | 2025-02-27T02:51:08 | 2025-02-27T03:01:08 | 40 | 0 | ---
datasets:
- EleutherAI/pile
language:
- en
license: apache-2.0
tags:
- pytorch
- causal-lm
- pythia
---
The *Pythia Scaling Suite* is a collection of models developed to facilitate
interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf).
It contains two sets of eight models of sizes
70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two
models: one trained on the Pile, and one trained on the Pile after the dataset
has been globally deduplicated. All 8 model sizes are trained on the exact
same data, in the exact same order. We also provide 154 intermediate
checkpoints per model, hosted on Hugging Face as branches.
The Pythia model suite was deliberately designed to promote scientific
research on large language models, especially interpretability research.
Despite not centering downstream performance as a design goal, we find the
models <a href="#evaluations">match or exceed</a> the performance of
similar and same-sized models, such as those in the OPT and GPT-Neo suites.
<details>
<summary style="font-weight:600">Details on previous early release and naming convention.</summary>
Previously, we released an early version of the Pythia suite to the public.
However, we decided to retrain the model suite to address a few hyperparameter
discrepancies. This model card <a href="#changelog">lists the changes</a>;
see appendix B in the Pythia paper for further discussion. We found no
difference in benchmark performance between the two Pythia versions.
The old models are
[still available](https://huggingface.co/models?other=pythia_v0), but we
suggest the retrained suite if you are just starting to use Pythia.<br>
**This is the current release.**
Please note that all models in the *Pythia* suite were renamed in January
2023. For clarity, a <a href="#naming-convention-and-parameter-count">table
comparing the old and new names</a> is provided in this model card, together
with exact parameter counts.
</details>
<br>
# Pythia-2.8B
## Model Details
- Developed by: [EleutherAI](http://eleuther.ai)
- Model type: Transformer-based Language Model
- Language: English
- Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia)
for training procedure, config files, and details on how to use.
[See paper](https://arxiv.org/pdf/2304.01373.pdf) for more evals and implementation
details.
- Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
- License: Apache 2.0
- Contact: to ask questions about this model, join the [EleutherAI
Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`.
Please read the existing *Pythia* documentation before asking about it in the
EleutherAI Discord. For general correspondence: [[email protected]](mailto:[email protected]).
<figure>
| Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models |
| -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: |
| 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — |
| 160M | 85,056,000 | 12 | 768 | 12 | 2M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M |
| 410M | 302,311,424 | 24 | 1024 | 16 | 2M | 3.0 x 10<sup>-4</sup> | OPT-350M |
| 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — |
| 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 2M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B |
| 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B |
| 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B |
| 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — |
<figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and
non-deduped models of a given size have the same hyperparameters. “Equivalent”
models have <b>exactly</b> the same architecture, and the same number of
non-embedding parameters.</figcaption>
</figure>
## Uses and Limitations
### Intended Use
The primary intended use of Pythia is research on the behavior, functionality,
and limitations of large language models. This suite is intended to provide
a controlled setting for performing scientific experiments. We also provide
154 checkpoints per model: initial `step0`, 10 log-spaced checkpoints
`step{1,2,4...512}`, and 143 evenly-spaced checkpoints from `step1000` to
`step143000`. These checkpoints are hosted on Hugging Face as branches. Note
that branch `143000` corresponds exactly to the model checkpoint on the `main`
branch of each model.
You may also further fine-tune and adapt Pythia-2.8B for deployment,
as long as your use is in accordance with the Apache 2.0 license. Pythia
models work with the Hugging Face [Transformers
Library](https://huggingface.co/docs/transformers/index). If you decide to use
pre-trained Pythia-2.8B as a basis for your fine-tuned model, please
conduct your own risk and bias assessment.
### Out-of-scope use
The Pythia Suite is **not** intended for deployment. It is not in itself
a product and cannot be used for human-facing interactions. For example,
the model may generate harmful or offensive text. Please evaluate the risks
associated with your particular use case.
Pythia models are English-language only, and are not suitable for translation
or generating text in other languages.
Pythia-2.8B has not been fine-tuned for downstream contexts in which
language models are commonly deployed, such as writing genre prose,
or commercial chatbots. This means Pythia-2.8B will **not**
respond to a given prompt the way a product like ChatGPT does. This is because,
unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement
Learning from Human Feedback (RLHF) to better “follow” human instructions.
### Limitations and biases
The core functionality of a large language model is to take a string of text
and predict the next token. The token the model deems most likely need not produce the
most “accurate” text. Never rely on Pythia-2.8B to produce factually accurate
output.
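To make “predict the next token” concrete, here is a minimal sketch that inspects the model's next-token distribution directly (the Quickstart below shows the standard loading pattern):
```python
import torch
from transformers import AutoTokenizer, GPTNeoXForCausalLM

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-2.8b")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocab)

# The final position's logits score every candidate next token; the top-scoring
# token is only the most *probable* continuation, not necessarily a *true* one
next_token_id = logits[0, -1].argmax()
print(tokenizer.decode(next_token_id))
```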
This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset
known to contain profanity and texts that are lewd or otherwise offensive.
See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a
discussion of documented biases with regards to gender, religion, and race.
Pythia-2.8B may produce socially unacceptable or undesirable text, *even if*
the prompt itself does not include anything explicitly offensive.
If you plan on using text generated through, for example, the Hosted Inference
API, we recommend having a human curate the outputs of this language model
before presenting them to other people. Please inform your audience that the
text was generated by Pythia-2.8B.
### Quickstart
Pythia models can be loaded and used via the following code, demonstrated here
for the third `pythia-70m-deduped` checkpoint:
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the model weights from the `step3000` checkpoint branch,
# caching them in a local directory
model = GPTNeoXForCausalLM.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

tokenizer = AutoTokenizer.from_pretrained(
  "EleutherAI/pythia-70m-deduped",
  revision="step3000",
  cache_dir="./pythia-70m-deduped/step3000",
)

# Tokenize a prompt, generate a continuation, and decode it back to text
inputs = tokenizer("Hello, I am", return_tensors="pt")
tokens = model.generate(**inputs)
tokenizer.decode(tokens[0])
```
Revision/branch `step143000` corresponds exactly to the model checkpoint on
the `main` branch of each model.<br>
For more information on how to use all Pythia models, see [documentation on
GitHub](https://github.com/EleutherAI/pythia).
## Training
### Training data
[The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in
English. It was created by EleutherAI specifically for training large language
models. It contains texts from 22 diverse sources, roughly broken down into
five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl),
prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and
miscellaneous (e.g. GitHub, Enron Emails). See [the Pile
paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources,
methodology, and a discussion of ethical implications. Consult [the
datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation
about the Pile and its component datasets. The Pile can be downloaded from
the [official website](https://pile.eleuther.ai/), or from a [community
mirror](https://the-eye.eu/public/AI/pile/).<br>
The Pile was **not** deduplicated before being used to train Pythia-2.8B.
### Training procedure
All models were trained on the exact same data, in the exact same order. Each
model saw 299,892,736,000 tokens during training, and 143 checkpoints for each
model are saved every 2,097,152,000 tokens, spaced evenly throughout training,
from `step1000` to `step143000` (which is the same as `main`). In addition, we
also provide frequent early checkpoints: `step0` and `step{1,2,4...512}`.
This corresponds to training for just under 1 epoch on the Pile for
non-deduplicated models, and about 1.5 epochs on the deduplicated Pile.
All *Pythia* models trained for 143000 steps at a batch size
of 2M (2,097,152 tokens).<br>
See [GitHub](https://github.com/EleutherAI/pythia) for more details on training
procedure, including [how to reproduce
it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br>
Pythia uses the same tokenizer as [GPT-NeoX-
20B](https://huggingface.co/EleutherAI/gpt-neox-20b).
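The token counts above are internally consistent; as a quick sanity check of the arithmetic:
```python
# 143,000 training steps at a batch size of 2,097,152 tokens per step
assert 143_000 * 2_097_152 == 299_892_736_000  # total tokens seen, as stated above

# A checkpoint every 1,000 steps corresponds to one every 2,097,152,000 tokens
assert 1_000 * 2_097_152 == 2_097_152_000
```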
## Evaluations
All 16 *Pythia* models were evaluated using the [LM Evaluation
Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access
the results by model and step at `results/json/*` in the [GitHub
repository](https://github.com/EleutherAI/pythia/tree/main/results/json/).<br>
Expand the sections below to see plots of evaluation results for all
Pythia and Pythia-deduped models compared with OPT and BLOOM.
<details>
<summary>LAMBADA – OpenAI</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai_v1.png" style="width:auto"/>
</details>
<details>
<summary>Physical Interaction: Question Answering (PIQA)</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa_v1.png" style="width:auto"/>
</details>
<details>
<summary>WinoGrande</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande_v1.png" style="width:auto"/>
</details>
<details>
<summary>AI2 Reasoning Challenge—Easy Set</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_easy_v1.png" style="width:auto"/>
</details>
<details>
<summary>SciQ</summary>
<img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq_v1.png" style="width:auto"/>
</details>
## Changelog
This section compares differences between previously released
[Pythia v0](https://huggingface.co/models?other=pythia_v0) and the current
models. See Appendix B of the Pythia paper for further discussion of these
changes and the motivation behind them. We found that retraining Pythia had no
impact on benchmark performance.
- All model sizes are now trained with a uniform batch size of 2M tokens.
Previously, the models of size 160M, 410M, and 1.4B parameters were trained
with batch sizes of 4M tokens.
- We added checkpoints at initialization (step 0) and steps {1,2,4,8,16,32,64,
128,256,512} in addition to every 1000 training steps.
- Flash Attention was used in the new retrained suite.
- We remedied a minor inconsistency that existed in the original suite: all
models of size 2.8B parameters or smaller had a learning rate (LR) schedule
which decayed to a minimum LR of 10% of the starting LR, but the 6.9B and
12B models all used an LR schedule which decayed to a minimum LR of 0. In
the redone training runs, we rectified this inconsistency: all models were now
trained with an LR decaying to a minimum of 0.1× their maximum LR.
### Naming convention and parameter count
*Pythia* models were renamed in January 2023. It is possible that the old
naming convention still persists in some documentation by accident. The
current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em">
| current Pythia suffix | old suffix | total params | non-embedding params |
| --------------------: | ---------: | -------------: | -------------------: |
| 70M | 19M | 70,426,624 | 18,915,328 |
| 160M | 125M | 162,322,944 | 85,056,000 |
| 410M | 350M | 405,334,016 | 302,311,424 |
| 1B | 800M | 1,011,781,632 | 805,736,448 |
| 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 |
| 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 |
| 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 |
| 12B | 13B | 11,846,072,320 | 11,327,027,200 |
</figure> | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"SCIQ"
] |
Omartificial-Intelligence-Space/Marbert-all-nli-triplet-Matryoshka | Omartificial-Intelligence-Space | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"mteb",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:557850",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"ar",
"dataset:Omartificial-Intelligence-Space/Arabic-NLi-Triplet",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"arxiv:2407.21139",
"base_model:UBC-NLP/MARBERTv2",
"base_model:finetune:UBC-NLP/MARBERTv2",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-17T11:23:10 | 2025-01-10T18:14:19 | 39 | 1 | ---
base_model: UBC-NLP/MARBERTv2
datasets:
- Omartificial-Intelligence-Space/Arabic-NLi-Triplet
language:
- ar
library_name: sentence-transformers
license: apache-2.0
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- mteb
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:557850
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: ذكر متوازن بعناية يقف على قدم واحدة بالقرب من منطقة شاطئ المحيط
النظيفة
sentences:
- رجل يقدم عرضاً
- هناك رجل بالخارج قرب الشاطئ
- رجل يجلس على أريكه
- source_sentence: رجل يقفز إلى سريره القذر
sentences:
- السرير قذر.
- رجل يضحك أثناء غسيل الملابس
- الرجل على القمر
- source_sentence: الفتيات بالخارج
sentences:
- امرأة تلف الخيط إلى كرات بجانب كومة من الكرات
- فتيان يركبان في جولة متعة
- ثلاث فتيات يقفون سوية في غرفة واحدة تستمع وواحدة تكتب على الحائط والثالثة تتحدث
إليهن
- source_sentence: الرجل يرتدي قميصاً أزرق.
sentences:
- رجل يرتدي قميصاً أزرق يميل إلى الجدار بجانب الطريق مع شاحنة زرقاء وسيارة حمراء
مع الماء في الخلفية.
- كتاب القصص مفتوح
- رجل يرتدي قميص أسود يعزف على الجيتار.
- source_sentence: يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة
شابة.
sentences:
- ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه
- رجل يستلقي على وجهه على مقعد في الحديقة.
- الشاب نائم بينما الأم تقود ابنتها إلى الحديقة
model-index:
- name: SentenceTransformer based on UBC-NLP/MARBERTv2
results:
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (ar)
type: mintaka/mmteb-mintaka
config: ar
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: main_score
value: 16.058
- type: map_at_1
value: 8.398
- type: map_at_3
value: 11.681
- type: map_at_5
value: 12.616
- type: map_at_10
value: 13.281
- type: ndcg_at_1
value: 8.398
- type: ndcg_at_3
value: 12.75
- type: ndcg_at_5
value: 14.453
- type: ndcg_at_10
value: 16.058
- type: recall_at_1
value: 8.398
- type: recall_at_3
value: 15.842
- type: recall_at_5
value: 20.018
- type: recall_at_10
value: 24.966
- type: precision_at_1
value: 8.398
- type: precision_at_3
value: 5.281
- type: precision_at_5
value: 4.004
- type: precision_at_10
value: 2.497
- type: mrr_at_1
value: 8.3976
- type: mrr_at_3
value: 11.681
- type: mrr_at_5
value: 12.6161
- type: mrr_at_10
value: 13.2812
- task:
type: Retrieval
dataset:
name: MTEB MIRACLRetrievalHardNegatives (ar)
type: miracl/mmteb-miracl-hardnegatives
config: ar
split: dev
revision: 95c8db7d4a6e9c1d8a60601afd63d553ae20a2eb
metrics:
- type: main_score
value: 15.853
- type: map_at_1
value: 5.867
- type: map_at_3
value: 9.003
- type: map_at_5
value: 10.068
- type: map_at_10
value: 11.294
- type: ndcg_at_1
value: 9.0
- type: ndcg_at_3
value: 11.363
- type: ndcg_at_5
value: 12.986
- type: ndcg_at_10
value: 15.853
- type: recall_at_1
value: 5.867
- type: recall_at_3
value: 12.639
- type: recall_at_5
value: 16.649
- type: recall_at_10
value: 24.422
- type: precision_at_1
value: 9.0
- type: precision_at_3
value: 7.1
- type: precision_at_5
value: 5.82
- type: precision_at_10
value: 4.38
- type: mrr_at_1
value: 9.0
- type: mrr_at_3
value: 13.4667
- type: mrr_at_5
value: 14.6367
- type: mrr_at_10
value: 16.0177
- task:
type: Retrieval
dataset:
name: MTEB MLQARetrieval (ar)
type: mlqa/mmteb-mlqa
config: ar
split: validation
revision: 397ed406c1a7902140303e7faf60fff35b58d285
metrics:
- type: main_score
value: 58.919
- type: map_at_1
value: 44.874
- type: map_at_3
value: 51.902
- type: map_at_5
value: 53.198
- type: map_at_10
value: 54.181
- type: ndcg_at_1
value: 44.874
- type: ndcg_at_3
value: 54.218
- type: ndcg_at_5
value: 56.541
- type: ndcg_at_10
value: 58.919
- type: recall_at_1
value: 44.874
- type: recall_at_3
value: 60.928
- type: recall_at_5
value: 66.538
- type: recall_at_10
value: 73.888
- type: precision_at_1
value: 44.874
- type: precision_at_3
value: 20.309
- type: precision_at_5
value: 13.308
- type: precision_at_10
value: 7.389
- type: mrr_at_1
value: 44.8743
- type: mrr_at_3
value: 51.902
- type: mrr_at_5
value: 53.1979
- type: mrr_at_10
value: 54.1809
- task:
type: Retrieval
dataset:
name: MTEB SadeemQuestionRetrieval (ar)
type: sadeem/mmteb-sadeem
config: default
split: test
revision: 3cb0752b182e5d5d740df547748b06663c8e0bd9
metrics:
- type: main_score
value: 57.068
- type: map_at_1
value: 24.414
- type: map_at_3
value: 45.333
- type: map_at_5
value: 46.695
- type: map_at_10
value: 47.429
- type: ndcg_at_1
value: 24.414
- type: ndcg_at_3
value: 52.828
- type: ndcg_at_5
value: 55.288
- type: ndcg_at_10
value: 57.068
- type: recall_at_1
value: 24.414
- type: recall_at_3
value: 74.725
- type: recall_at_5
value: 80.708
- type: recall_at_10
value: 86.213
- type: precision_at_1
value: 24.414
- type: precision_at_3
value: 24.908
- type: precision_at_5
value: 16.142
- type: precision_at_10
value: 8.621
- type: mrr_at_1
value: 25.2753
- type: mrr_at_3
value: 45.58
- type: mrr_at_5
value: 46.8581
- type: mrr_at_10
value: 47.6414
- task:
type: STS
dataset:
name: MTEB BIOSSES (default)
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 49.25240527202211
- type: cosine_spearman
value: 51.87708566904703
- type: euclidean_pearson
value: 49.790877425774696
- type: euclidean_spearman
value: 51.725274981021855
- type: main_score
value: 51.87708566904703
- type: manhattan_pearson
value: 52.31560776967401
- type: manhattan_spearman
value: 54.28979124658997
- task:
type: STS
dataset:
name: MTEB SICK-R (default)
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 65.81089479351829
- type: cosine_spearman
value: 65.80163441928238
- type: euclidean_pearson
value: 65.2718874370746
- type: euclidean_spearman
value: 65.92429031695988
- type: main_score
value: 65.80163441928238
- type: manhattan_pearson
value: 65.28701419332383
- type: manhattan_spearman
value: 65.94229793651319
- task:
type: STS
dataset:
name: MTEB STS12 (default)
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 65.11346939995998
- type: cosine_spearman
value: 63.00297824477175
- type: euclidean_pearson
value: 63.85320097970942
- type: euclidean_spearman
value: 63.25151047701848
- type: main_score
value: 63.00297824477175
- type: manhattan_pearson
value: 64.40291990853984
- type: manhattan_spearman
value: 63.63497232399945
- task:
type: STS
dataset:
name: MTEB STS13 (default)
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 52.2735823521702
- type: cosine_spearman
value: 52.23198766098021
- type: euclidean_pearson
value: 54.12467577456837
- type: euclidean_spearman
value: 52.40014028261351
- type: main_score
value: 52.23198766098021
- type: manhattan_pearson
value: 54.38052509834607
- type: manhattan_spearman
value: 52.70836595958237
- task:
type: STS
dataset:
name: MTEB STS14 (default)
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 58.55307076840419
- type: cosine_spearman
value: 59.2261024017655
- type: euclidean_pearson
value: 59.55734715751804
- type: euclidean_spearman
value: 60.135899681574834
- type: main_score
value: 59.2261024017655
- type: manhattan_pearson
value: 59.99274396356966
- type: manhattan_spearman
value: 60.44325356503041
- task:
type: STS
dataset:
name: MTEB STS15 (default)
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 68.94418532602707
- type: cosine_spearman
value: 70.01912156519296
- type: euclidean_pearson
value: 71.67028435860581
- type: euclidean_spearman
value: 71.48252471922122
- type: main_score
value: 70.01912156519296
- type: manhattan_pearson
value: 71.9587452337792
- type: manhattan_spearman
value: 71.69160519065173
- task:
type: STS
dataset:
name: MTEB STS16 (default)
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 62.81619254162203
- type: cosine_spearman
value: 64.98814526698425
- type: euclidean_pearson
value: 66.43531796610995
- type: euclidean_spearman
value: 66.53768451143964
- type: main_score
value: 64.98814526698425
- type: manhattan_pearson
value: 66.57822125651369
- type: manhattan_spearman
value: 66.71830390508079
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 81.68055610903552
- type: cosine_spearman
value: 82.18125783448961
- type: euclidean_pearson
value: 80.5422740473486
- type: euclidean_spearman
value: 81.79456727036232
- type: main_score
value: 82.18125783448961
- type: manhattan_pearson
value: 80.43564733654793
- type: manhattan_spearman
value: 81.76103816207625
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 51.33460593849487
- type: cosine_spearman
value: 58.07741072443786
- type: euclidean_pearson
value: 54.26430308336828
- type: euclidean_spearman
value: 58.8384539429318
- type: main_score
value: 58.07741072443786
- type: manhattan_pearson
value: 54.41587176266624
- type: manhattan_spearman
value: 58.831993325957086
- task:
type: STS
dataset:
name: MTEB STSBenchmark (default)
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 61.11956207522431
- type: cosine_spearman
value: 61.16768766134144
- type: euclidean_pearson
value: 64.44141934993837
- type: euclidean_spearman
value: 63.450379593077066
- type: main_score
value: 61.16768766134144
- type: manhattan_pearson
value: 64.43852352892529
- type: manhattan_spearman
value: 63.57630045107761
- task:
type: Summarization
dataset:
name: MTEB SummEval (default)
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 29.583566160417668
- type: cosine_spearman
value: 29.534419950502212
- type: dot_pearson
value: 28.13970643170574
- type: dot_spearman
value: 28.907762267009073
- type: main_score
value: 29.534419950502212
- type: pearson
value: 29.583566160417668
- type: spearman
value: 29.534419950502212
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 768
type: sts-test-768
metrics:
- type: pearson_cosine
value: 0.611168498883907
name: Pearson Cosine
- type: spearman_cosine
value: 0.6116733587939157
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6443687886661206
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6358107360369792
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.644404066642609
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6345893921062774
name: Spearman Euclidean
- type: pearson_dot
value: 0.4723643245352202
name: Pearson Dot
- type: spearman_dot
value: 0.44844519905410135
name: Spearman Dot
- type: pearson_max
value: 0.644404066642609
name: Pearson Max
- type: spearman_max
value: 0.6358107360369792
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 512
type: sts-test-512
metrics:
- type: pearson_cosine
value: 0.6664570291720014
name: Pearson Cosine
- type: spearman_cosine
value: 0.6647687532159875
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6429976947418544
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6334753432753939
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6466249455585532
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6373181315122213
name: Spearman Euclidean
- type: pearson_dot
value: 0.5370129457359227
name: Pearson Dot
- type: spearman_dot
value: 0.5241649973373772
name: Spearman Dot
- type: pearson_max
value: 0.6664570291720014
name: Pearson Max
- type: spearman_max
value: 0.6647687532159875
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 256
type: sts-test-256
metrics:
- type: pearson_cosine
value: 0.6601248277308522
name: Pearson Cosine
- type: spearman_cosine
value: 0.6592739654246011
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6361644543165994
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6250621947417249
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6408426652431157
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6300109524350457
name: Spearman Euclidean
- type: pearson_dot
value: 0.5250513197384045
name: Pearson Dot
- type: spearman_dot
value: 0.5154779060125071
name: Spearman Dot
- type: pearson_max
value: 0.6601248277308522
name: Pearson Max
- type: spearman_max
value: 0.6592739654246011
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 128
type: sts-test-128
metrics:
- type: pearson_cosine
value: 0.6549481034721005
name: Pearson Cosine
- type: spearman_cosine
value: 0.6523201621940143
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6342700090917214
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6226791710099966
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6397224689512541
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6280973341704362
name: Spearman Euclidean
- type: pearson_dot
value: 0.47240889358810917
name: Pearson Dot
- type: spearman_dot
value: 0.4633669926372942
name: Spearman Dot
- type: pearson_max
value: 0.6549481034721005
name: Pearson Max
- type: spearman_max
value: 0.6523201621940143
name: Spearman Max
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test 64
type: sts-test-64
metrics:
- type: pearson_cosine
value: 0.6367217585211098
name: Pearson Cosine
- type: spearman_cosine
value: 0.6370191671711296
name: Spearman Cosine
- type: pearson_manhattan
value: 0.6263730801254332
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.6118927366012856
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.6327699647617465
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.6180184829867724
name: Spearman Euclidean
- type: pearson_dot
value: 0.41169381399943167
name: Pearson Dot
- type: spearman_dot
value: 0.40444222536491986
name: Spearman Dot
- type: pearson_max
value: 0.6367217585211098
name: Pearson Max
- type: spearman_max
value: 0.6370191671711296
name: Spearman Max
---
# SentenceTransformer based on UBC-NLP/MARBERTv2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [UBC-NLP/MARBERTv2](https://huggingface.co/UBC-NLP/MARBERTv2) on the Omartificial-Intelligence-Space/arabic-n_li-triplet dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [UBC-NLP/MARBERTv2](https://huggingface.co/UBC-NLP/MARBERTv2) <!-- at revision fe88db9db8ccdb0c4e1627495f405c44a5f89066 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- Omartificial-Intelligence-Space/arabic-n_li-triplet
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Omartificial-Intelligence-Space/Marbert-all-nli-triplet")
# Run inference
sentences = [
'يجلس شاب ذو شعر أشقر على الحائط يقرأ جريدة بينما تمر امرأة وفتاة شابة.',
'ذكر شاب ينظر إلى جريدة بينما تمر إمرأتان بجانبه',
'الشاب نائم بينما الأم تقود ابنتها إلى الحديقة',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test-768`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6112 |
| **spearman_cosine** | **0.6117** |
| pearson_manhattan | 0.6444 |
| spearman_manhattan | 0.6358 |
| pearson_euclidean | 0.6444 |
| spearman_euclidean | 0.6346 |
| pearson_dot | 0.4724 |
| spearman_dot | 0.4484 |
| pearson_max | 0.6444 |
| spearman_max | 0.6358 |
#### Semantic Similarity
* Dataset: `sts-test-512`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6665 |
| **spearman_cosine** | **0.6648** |
| pearson_manhattan | 0.643 |
| spearman_manhattan | 0.6335 |
| pearson_euclidean | 0.6466 |
| spearman_euclidean | 0.6373 |
| pearson_dot | 0.537 |
| spearman_dot | 0.5242 |
| pearson_max | 0.6665 |
| spearman_max | 0.6648 |
#### Semantic Similarity
* Dataset: `sts-test-256`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6601 |
| **spearman_cosine** | **0.6593** |
| pearson_manhattan | 0.6362 |
| spearman_manhattan | 0.6251 |
| pearson_euclidean | 0.6408 |
| spearman_euclidean | 0.63 |
| pearson_dot | 0.5251 |
| spearman_dot | 0.5155 |
| pearson_max | 0.6601 |
| spearman_max | 0.6593 |
#### Semantic Similarity
* Dataset: `sts-test-128`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.6549 |
| **spearman_cosine** | **0.6523** |
| pearson_manhattan | 0.6343 |
| spearman_manhattan | 0.6227 |
| pearson_euclidean | 0.6397 |
| spearman_euclidean | 0.6281 |
| pearson_dot | 0.4724 |
| spearman_dot | 0.4634 |
| pearson_max | 0.6549 |
| spearman_max | 0.6523 |
#### Semantic Similarity
* Dataset: `sts-test-64`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.6367 |
| **spearman_cosine** | **0.637** |
| pearson_manhattan | 0.6264 |
| spearman_manhattan | 0.6119 |
| pearson_euclidean | 0.6328 |
| spearman_euclidean | 0.618 |
| pearson_dot | 0.4117 |
| spearman_dot | 0.4044 |
| pearson_max | 0.6367 |
| spearman_max | 0.637 |
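Scores like those above come from the `EmbeddingSimilarityEvaluator` linked in each subsection. A minimal sketch with placeholder Arabic pairs and gold scores (the reported numbers use the full STS test split, not this toy data):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("Omartificial-Intelligence-Space/Marbert-all-nli-triplet")

# Placeholder sentence pairs with gold similarity scores in [0, 1]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["امرأتان يتعانقان بينما يحملان حزمة", "رجل يبيع الدونات لعميل"],
    sentences2=["إمرأتان يحملان حزمة", "امرأة تشرب قهوتها في مقهى صغير"],
    scores=[0.9, 0.1],
    name="sts-sketch",
)
print(evaluator(model))  # Pearson/Spearman correlations per similarity function
```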
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 557,850 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 7.68 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.66 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.47 tokens</li><li>max: 40 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:------------------------------------------------------------|:--------------------------------------------|:------------------------------------|
| <code>شخص على حصان يقفز فوق طائرة معطلة</code> | <code>شخص في الهواء الطلق، على حصان.</code> | <code>شخص في مطعم، يطلب عجة.</code> |
| <code>أطفال يبتسمون و يلوحون للكاميرا</code> | <code>هناك أطفال حاضرون</code> | <code>الاطفال يتجهمون</code> |
| <code>صبي يقفز على لوح التزلج في منتصف الجسر الأحمر.</code> | <code>الفتى يقوم بخدعة التزلج</code> | <code>الصبي يتزلج على الرصيف</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
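Because MatryoshkaLoss nests the sub-embeddings at the `matryoshka_dims` listed above, embeddings can be truncated at load time with limited quality loss. A minimal sketch using the `truncate_dim` argument (available in sentence-transformers >= 2.7):
```python
from sentence_transformers import SentenceTransformer

# Ask encode() for 256-dimensional embeddings; smaller dims trade a little
# accuracy (compare the sts-test tables above) for memory and speed.
model = SentenceTransformer(
    "Omartificial-Intelligence-Space/Marbert-all-nli-triplet",
    truncate_dim=256,
)
embeddings = model.encode(["شخص في الهواء الطلق، على حصان."])
print(embeddings.shape)  # (1, 256)
```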
### Evaluation Dataset
#### Omartificial-Intelligence-Space/arabic-n_li-triplet
* Dataset: Omartificial-Intelligence-Space/arabic-n_li-triplet
* Size: 6,584 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 14.78 tokens</li><li>max: 70 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.41 tokens</li><li>max: 29 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 7.95 tokens</li><li>max: 21 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|:---------------------------------------------------|
| <code>امرأتان يتعانقان بينما يحملان حزمة</code> | <code>إمرأتان يحملان حزمة</code> | <code>الرجال يتشاجرون خارج مطعم</code> |
| <code>طفلين صغيرين يرتديان قميصاً أزرق، أحدهما يرتدي الرقم 9 والآخر يرتدي الرقم 2 يقفان على خطوات خشبية في الحمام ويغسلان أيديهما في المغسلة.</code> | <code>طفلين يرتديان قميصاً مرقماً يغسلون أيديهم</code> | <code>طفلين يرتديان سترة يذهبان إلى المدرسة</code> |
| <code>رجل يبيع الدونات لعميل خلال معرض عالمي أقيم في مدينة أنجليس</code> | <code>رجل يبيع الدونات لعميل</code> | <code>امرأة تشرب قهوتها في مقهى صغير</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | sts-test-128_spearman_cosine | sts-test-256_spearman_cosine | sts-test-512_spearman_cosine | sts-test-64_spearman_cosine | sts-test-768_spearman_cosine |
|:------:|:----:|:-------------:|:----------------------------:|:----------------------------:|:----------------------------:|:---------------------------:|:----------------------------:|
| 0.0229 | 200 | 25.0771 | - | - | - | - | - |
| 0.0459 | 400 | 9.1435 | - | - | - | - | - |
| 0.0688 | 600 | 8.0492 | - | - | - | - | - |
| 0.0918 | 800 | 7.1378 | - | - | - | - | - |
| 0.1147 | 1000 | 7.6249 | - | - | - | - | - |
| 0.1377 | 1200 | 7.3604 | - | - | - | - | - |
| 0.1606 | 1400 | 6.5783 | - | - | - | - | - |
| 0.1835 | 1600 | 6.4145 | - | - | - | - | - |
| 0.2065 | 1800 | 6.1781 | - | - | - | - | - |
| 0.2294 | 2000 | 6.2375 | - | - | - | - | - |
| 0.2524 | 2200 | 6.2587 | - | - | - | - | - |
| 0.2753 | 2400 | 6.0826 | - | - | - | - | - |
| 0.2983 | 2600 | 6.1514 | - | - | - | - | - |
| 0.3212 | 2800 | 5.6949 | - | - | - | - | - |
| 0.3442 | 3000 | 6.0062 | - | - | - | - | - |
| 0.3671 | 3200 | 5.7551 | - | - | - | - | - |
| 0.3900 | 3400 | 5.658 | - | - | - | - | - |
| 0.4130 | 3600 | 5.7135 | - | - | - | - | - |
| 0.4359 | 3800 | 5.3909 | - | - | - | - | - |
| 0.4589 | 4000 | 5.5068 | - | - | - | - | - |
| 0.4818 | 4200 | 5.2261 | - | - | - | - | - |
| 0.5048 | 4400 | 5.1674 | - | - | - | - | - |
| 0.5277 | 4600 | 5.0427 | - | - | - | - | - |
| 0.5506 | 4800 | 5.3824 | - | - | - | - | - |
| 0.5736 | 5000 | 5.3063 | - | - | - | - | - |
| 0.5965 | 5200 | 5.2174 | - | - | - | - | - |
| 0.6195 | 5400 | 5.2116 | - | - | - | - | - |
| 0.6424 | 5600 | 5.2226 | - | - | - | - | - |
| 0.6654 | 5800 | 5.2051 | - | - | - | - | - |
| 0.6883 | 6000 | 5.204 | - | - | - | - | - |
| 0.7113 | 6200 | 5.154 | - | - | - | - | - |
| 0.7342 | 6400 | 5.0236 | - | - | - | - | - |
| 0.7571 | 6600 | 4.9476 | - | - | - | - | - |
| 0.7801 | 6800 | 4.0164 | - | - | - | - | - |
| 0.8030 | 7000 | 3.5707 | - | - | - | - | - |
| 0.8260 | 7200 | 3.3586 | - | - | - | - | - |
| 0.8489 | 7400 | 3.2376 | - | - | - | - | - |
| 0.8719 | 7600 | 3.0282 | - | - | - | - | - |
| 0.8948 | 7800 | 2.901 | - | - | - | - | - |
| 0.9177 | 8000 | 2.9371 | - | - | - | - | - |
| 0.9407 | 8200 | 2.8362 | - | - | - | - | - |
| 0.9636 | 8400 | 2.8121 | - | - | - | - | - |
| 0.9866 | 8600 | 2.7105 | - | - | - | - | - |
| 1.0 | 8717 | - | 0.6523 | 0.6593 | 0.6648 | 0.6370 | 0.6117 |
### Framework Versions
- Python: 3.9.18
- Sentence Transformers: 3.0.1
- Transformers: 4.40.0
- PyTorch: 2.2.2+cu121
- Accelerate: 0.26.1
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## <span style="color:blue">Acknowledgments</span>
The author would like to thank Prince Sultan University for their invaluable support in this project. Their contributions and resources have been instrumental in the development and fine-tuning of these models.
## Citation
If you use the Arabic Matryoshka Embeddings Model, please cite it as follows:
```bibtex
@misc{nacar2024enhancingsemanticsimilarityunderstanding,
title={Enhancing Semantic Similarity Understanding in Arabic NLP with Nested Embedding Learning},
author={Omer Nacar and Anis Koubaa},
year={2024},
eprint={2407.21139},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2407.21139},
}
``` | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES"
] |
empirischtech/Llama-3.1-8B-Instruct-MedQA | empirischtech | null | [
"safetensors",
"llama",
"medical",
"climate",
"biology",
"chemistry",
"en",
"dataset:openlifescienceai/medmcqa",
"dataset:bigbio/med_qa",
"dataset:bigbio/pubmed_qa",
"dataset:empirischtech/med-qa-orpo-dpo",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"region:us"
] | 2025-02-06T11:39:41 | 2025-02-11T14:21:47 | 39 | 1 | ---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
datasets:
- openlifescienceai/medmcqa
- bigbio/med_qa
- bigbio/pubmed_qa
- empirischtech/med-qa-orpo-dpo
language:
- en
license: llama3.1
metrics:
- accuracy
tags:
- medical
- climate
- biology
- chemistry
---
# Llama-3.1-8B Medical Fine-Tuned Model
## Overview
This is a **fine-tuned version of Llama-3.1-8B** trained on a specialized **medical dataset** to enhance accuracy and contextual understanding in healthcare-related queries. The model has been optimized to provide **precise and reliable answers** to medical questions while improving performance in topic tagging and sentiment analysis.
## Features
- **Medical Question Answering**: Improved capability to understand and respond to medical inquiries with domain-specific knowledge.
- **Topic Tagging**: Enhanced ability to categorize medical content into relevant topics for better organization and retrieval.
- **Sentiment Analysis**: Tuned to assess emotional tone in medical discussions, making it useful for patient feedback analysis and clinical communication.
## Use Cases
- **Clinical Decision Support**: Assisting healthcare professionals in retrieving relevant medical insights.
- **Medical Chatbots**: Providing accurate and context-aware responses to patient queries.
- **Healthcare Content Analysis**: Extracting key topics and sentiments from medical literature, patient reviews, and discussions.
## Model Details
- **Base Model**: Llama-3.1-8B
- **Fine-Tuning Dataset**: Curated medical literature, clinical case studies, and healthcare FAQs
- **Task-Specific Training**: Trained with reinforcement learning and domain-specific optimizations
## Installation & Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "empirischtech/Llama-3.1-8B-Instruct-MedQA"

# Load tokenizer and model (a causal-LM head is needed to generate answers)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Example usage: ask a medical question through the chat template
messages = [{"role": "user", "content": "What are the symptoms of diabetes?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
## License
This model is intended for research and educational purposes. Please review the licensing terms before commercial use.
## Acknowledgments
We acknowledge the contributions of medical professionals and researchers who provided valuable insights for fine-tuning this model.
---
**Disclaimer**: This model is not a substitute for professional medical advice. Always consult a healthcare provider for clinical decisions. | [
"QUESTION_ANSWERING"
] | [
"MEDQA"
] |
PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist | PlanTL-GOB-ES | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"biomedical",
"clinical",
"eHR",
"spanish",
"es",
"dataset:PlanTL-GOB-ES/cantemist-ner",
"arxiv:1907.11692",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-04-07T14:29:07 | 2022-11-15T16:40:59 | 38 | 2 | ---
datasets:
- PlanTL-GOB-ES/cantemist-ner
language:
- es
license: apache-2.0
metrics:
- f1
tags:
- biomedical
- clinical
- eHR
- spanish
widget:
- text: El diagnóstico definitivo de nuestro paciente fue de un Adenocarcinoma de
pulmón cT2a cN3 cM1a Estadio IV (por una única lesión pulmonar contralateral)
PD-L1 90%, EGFR negativo, ALK negativo y ROS-1 negativo.
- text: Durante el ingreso se realiza una TC, observándose un nódulo pulmonar en el
LII y una masa renal derecha indeterminada. Se realiza punción biopsia del nódulo
pulmonar, con hallazgos altamente sospechosos de carcinoma.
- text: Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, sobre
hígado cirrótico, en paciente con índice Child-Pugh B.
model-index:
- name: PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist
results:
- task:
type: token-classification
dataset:
name: cantemist-ner
type: PlanTL-GOB-ES/cantemist-ner
metrics:
- type: f1
value: 0.834
name: f1
---
# Spanish RoBERTa-base biomedical model finetuned for the Named Entity Recognition (NER) task on the Cantemist dataset.
## Table of contents
<details>
<summary>Click to expand</summary>
- [Model description](#model-description)
- [Intended uses and limitations](#intended-use)
- [How to use](#how-to-use)
- [Limitations and bias](#limitations-and-bias)
- [Training](#training)
- [Evaluation](#evaluation)
- [Additional information](#additional-information)
- [Author](#author)
- [Contact information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
- [Citing information](#citing-information)
- [Disclaimer](#disclaimer)
</details>
## Model description
A fine-tuned version of the [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es) model, a [RoBERTa](https://arxiv.org/abs/1907.11692)-based model pre-trained on the largest Spanish biomedical corpus known to date, composed of biomedical documents, clinical cases and EHR documents totalling 1.1B tokens of clean and deduplicated text.
For more details about the corpora and training, check the _bsc-bio-ehr-es_ model card.
## Intended uses and limitations
## How to use
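A minimal sketch using the standard `transformers` token-classification pipeline (the aggregation strategy shown is a suggested setting, not taken from the original card):
```python
from transformers import pipeline

# NER pipeline over the fine-tuned checkpoint; "simple" merges sub-word pieces
ner = pipeline(
    "token-classification",
    model="PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist",
    aggregation_strategy="simple",
)

text = (
    "Trombosis paraneoplásica con sospecha de hepatocarcinoma por imagen, "
    "sobre hígado cirrótico, en paciente con índice Child-Pugh B."
)
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], round(float(entity["score"]), 3))
```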
## Limitations and bias
At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.
## Training
The dataset used is [CANTEMIST](https://huggingface.co/datasets/PlanTL-GOB-ES/cantemist-ner), a NER dataset annotated with tumor morphology entities. For further information, check the [official website](https://temu.bsc.es/cantemist/).
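The dataset can be inspected with the `datasets` library; a minimal sketch (recent `datasets` versions may additionally require `trust_remote_code=True` for script-based Hub datasets):
```python
from datasets import load_dataset

# Train/validation/test splits with token-level NER tags for tumor morphology
cantemist = load_dataset("PlanTL-GOB-ES/cantemist-ner")
print(cantemist)
print(cantemist["train"][0])
```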
## Evaluation
F1 Score: 0.8340
For evaluation details visit our [GitHub repository](https://github.com/PlanTL-GOB-ES/lm-biomedical-clinical-es).
## Additional information
### Author
Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected])
### Contact information
For further information, send an email to <[email protected]>
### Copyright
Copyright by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) (2022)
### Licensing information
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work was funded by the Spanish State Secretariat for Digitalization and Artificial Intelligence (SEDIA) within the framework of the Plan-TL.
### Citing information
If you use these models, please cite our work:
```bibtex
@inproceedings{carrino-etal-2022-pretrained,
title = "Pretrained Biomedical Language Models for Clinical {NLP} in {S}panish",
author = "Carrino, Casimiro Pio and
Llop, Joan and
P{\`a}mies, Marc and
Guti{\'e}rrez-Fandi{\~n}o, Asier and
Armengol-Estap{\'e}, Jordi and
Silveira-Ocampo, Joaqu{\'\i}n and
Valencia, Alfonso and
Gonzalez-Agirre, Aitor and
Villegas, Marta",
booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.bionlp-1.19",
doi = "10.18653/v1/2022.bionlp-1.19",
pages = "193--199",
abstract = "This work presents the first large-scale biomedical Spanish language models trained from scratch, using large biomedical corpora consisting of a total of 1.1B tokens and an EHR corpus of 95M tokens. We compared them against general-domain and other domain-specific models for Spanish on three clinical NER tasks. As main results, our models are superior across the NER tasks, rendering them more convenient for clinical NLP applications. Furthermore, our findings indicate that when enough data is available, pre-training from scratch is better than continual pre-training when tested on clinical tasks, raising an exciting research question about which approach is optimal. Our models and fine-tuning scripts are publicly available at HuggingFace and GitHub.",
}
```
### Disclaimer
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of artificial intelligence.
In no event shall the owner of the models (SEDIA – State Secretariat for digitalization and artificial intelligence) nor the creator (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"CANTEMIST"
] |
soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF | soichisumi | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-28T15:58:26 | 2024-08-28T16:03:46 | 38 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
# soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048
```
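Because gte-Qwen2-7B-instruct is an embedding model, you will usually want the server's embeddings endpoint rather than text completion. A minimal sketch, assuming a recent llama.cpp build where the `--embedding` flag enables the OpenAI-compatible `/v1/embeddings` route (flag and route names can vary between versions):
```bash
# Start the server in embedding mode (assumes --embedding is supported
# by your llama.cpp build).
llama-server --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF \
  --hf-file gte-qwen2-7b-instruct-q8_0.gguf --embedding -c 2048

# Query the OpenAI-compatible embeddings route (default port 8080).
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "What is the capital of France?"}'
```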
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
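If you want GPU offload, the same build pattern applies. A sketch assuming the Makefile-based build path (newer llama.cpp trees use CMake, where the option names differ):
```bash
# Build with CUDA offload on Linux, matching the LLAMA_CUDA=1 flag
# mentioned above (Makefile build assumed).
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```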
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048
```
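For a quick sanity check of the embeddings themselves, llama.cpp also ships an embedding example. A sketch assuming the binary is named `llama-embedding` in your build and accepts the same `--hf-repo`/`--hf-file` flags as the tools above:
```bash
# Print the embedding vector for a single prompt (binary name and
# shared flags are assumptions about your llama.cpp build).
./llama-embedding --hf-repo soichisumi/gte-Qwen2-7B-instruct-Q8_0-GGUF \
  --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "hello world"
```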
| ["SUMMARIZATION"] | ["BIOSSES", "SCIFACT"] |
sthuck/gte-Qwen2-7B-instruct-bfloat16 | sthuck | sentence-similarity | ["sentence-transformers", "safetensors", "qwen2", "feature-extraction", "mteb", "transformers", "Qwen2", "sentence-similarity", "custom_code", "arxiv:2308.03281", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "text-embeddings-inference", "endpoints_compatible", "region:us"] | 2025-01-01T00:17:35 | 2025-01-01T00:30:08 | 38 | 0 |
---
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
## gte-Qwen2-7B-instruct
**gte-Qwen2-7B-instruct** is the latest model in the gte (General Text Embedding) model family, ranking **No.1** in both English and Chinese evaluations on the [Massive Text Embedding Benchmark (MTEB)](https://huggingface.co/spaces/mteb/leaderboard) (as of June 16, 2024).
Recently, the [**Qwen team**](https://huggingface.co/Qwen) released the Qwen2 series models, and we have trained the **gte-Qwen2-7B-instruct** model based on the [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) LLM model. Compared to the [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) model, the **gte-Qwen2-7B-instruct** model uses the same training data and training strategies during the finetuning stage, with the only difference being the upgraded base model to Qwen2-7B. Considering the improvements in the Qwen2 series models compared to the Qwen1.5 series, we can also expect consistent performance enhancements in the embedding models.
The model incorporates several key advancements:
- Integration of bidirectional attention mechanisms, enriching its contextual understanding.
- Instruction tuning, applied solely on the query side for streamlined efficiency.
- Comprehensive training across a vast, multilingual text corpus spanning diverse domains and scenarios. This training leverages both weakly supervised and supervised data, ensuring the model's applicability across numerous languages and a wide array of downstream tasks.
## Model Information
- Model Size: 7B
- Embedding Dimension: 3584
- Max Input Tokens: 32k
## Requirements
```
transformers>=4.39.2
flash_attn>=2.5.6
```
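These can be installed with pip; note that `flash_attn` may additionally require build flags such as `--no-build-isolation` depending on your environment:
```
pip install "transformers>=4.39.2" "flash_attn>=2.5.6"
```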
## Usage
### Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Alibaba-NLP/gte-Qwen2-7B-instruct", trust_remote_code=True)
# In case you want to reduce the maximum length:
model.max_seq_length = 8192
queries = [
"how much protein should a female eat",
"summit define",
]
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments.",
]
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = (query_embeddings @ document_embeddings.T) * 100
print(scores.tolist())
```
See [config_sentence_transformers.json](config_sentence_transformers.json) for all pre-built prompt names. Otherwise, you can use `model.encode(queries, prompt="Instruct: ...\nQuery: ")` to pass a custom prompt of your choice.
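For instance, a custom prompt built from a task description might look like the sketch below; the task string is illustrative (it is the one used in the Transformers example that follows), and `model` and `queries` come from the snippet above:
```python
# Hypothetical custom prompt; mirrors the "Instruct: ...\nQuery: " template above.
task = "Given a web search query, retrieve relevant passages that answer the query"
query_embeddings = model.encode(queries, prompt=f"Instruct: {task}\nQuery: ")
```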
### Transformers
```python
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # If the last position is attended for every sequence, the batch is
    # left-padded, so each sequence's final token sits at index -1.
    left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
    if left_padding:
        return last_hidden_states[:, -1]
    else:
        # Right-padded batch: gather each sequence's last non-padding token.
        sequence_lengths = attention_mask.sum(dim=1) - 1
        batch_size = last_hidden_states.shape[0]
        return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery: {query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'how much protein should a female eat'),
get_detailed_instruct(task, 'summit define')
]
# No need to add instruction for retrieval documents
documents = [
"As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
"Definition of summit for English Language Learners. : 1 the highest point of a mountain : the top of a mountain. : 2 the highest level. : 3 a meeting or series of meetings between the leaders of two or more governments."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
model = AutoModel.from_pretrained('Alibaba-NLP/gte-Qwen2-7B-instruct', trust_remote_code=True)
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=max_length, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Infinity_emb
Usage via [infinity](https://github.com/michaelfeil/infinity), an MIT-licensed inference server.
```
# requires ~16-32GB VRAM and NVIDIA Compute Capability >= 8.0
docker run \
-v $PWD/data:/app/.cache --gpus "0" -p "7997":"7997" \
michaelf34/infinity:0.0.68-trt-onnx \
v2 --model-id Alibaba-NLP/gte-Qwen2-7B-instruct --revision "refs/pr/38" --dtype bfloat16 --batch-size 8 --device cuda --engine torch --port 7997 --no-bettertransformer
```
## Evaluation
### MTEB & C-MTEB
You can use the [scripts/eval_mteb.py](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct/blob/main/scripts/eval_mteb.py) to reproduce the following result of **gte-Qwen2-7B-instruct** on MTEB(English)/C-MTEB(Chinese):
| Model Name | MTEB(56) | C-MTEB(35) | MTEB-fr(26) | MTEB-pl(26) |
|:----:|:---------:|:----------:|:----------:|:----------:|
| [bge-base-en-1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | 64.23 | - | - | - |
| [bge-large-en-1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | 63.55 | - | - | - |
| [gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | 65.39 | - | - | - |
| [gte-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | 64.11 | - | - | - |
| [mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) | 64.68 | - | - | - |
| [acge_text_embedding](https://huggingface.co/aspire/acge_text_embedding) | - | 69.07 | - | - |
| [stella-mrl-large-zh-v3.5-1792d](https://huggingface.co/infgrad/stella-mrl-large-zh-v3.5-1792d) | - | 68.55 | - | - |
| [gte-large-zh](https://huggingface.co/thenlper/gte-large-zh) | - | 66.72 | - | - |
| [multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 59.45 | 56.21 | - | - |
| [multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 61.50 | 58.81 | - | - |
| [e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) | 66.63 | 60.81 | - | - |
| [gte-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | 67.34 | 69.52 | - | - |
| [NV-Embed-v1](https://huggingface.co/nvidia/NV-Embed-v1) | 69.32 | - | - | - |
| [**gte-Qwen2-7B-instruct**](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | **70.24** | **72.05** | **68.25** | **67.86** |
| [gte-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | 67.16 | 67.65 | 66.60 | 64.04 |
### GTE Models
The gte series has consistently released two types of models: encoder-only models (based on the BERT architecture) and decoder-only models (based on the LLM architecture).
| Models | Language | Max Sequence Length | Dimension | Model Size (Memory Usage, fp32) |
|:-------------------------------------------------------------------------------------:|:--------:|:-----: |:---------:|:-------------------------------:|
| [GTE-large-zh](https://huggingface.co/thenlper/gte-large-zh) | Chinese | 512 | 1024 | 1.25GB |
| [GTE-base-zh](https://huggingface.co/thenlper/gte-base-zh) | Chinese | 512 | 512 | 0.41GB |
| [GTE-small-zh](https://huggingface.co/thenlper/gte-small-zh) | Chinese | 512 | 512 | 0.12GB |
| [GTE-large](https://huggingface.co/thenlper/gte-large) | English | 512 | 1024 | 1.25GB |
| [GTE-base](https://huggingface.co/thenlper/gte-base) | English | 512 | 512 | 0.21GB |
| [GTE-small](https://huggingface.co/thenlper/gte-small) | English | 512 | 384 | 0.10GB |
| [GTE-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) | English | 8192 | 1024 | 1.74GB |
| [GTE-base-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-base-en-v1.5) | English | 8192 | 768 | 0.51GB |
| [GTE-Qwen1.5-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen1.5-7B-instruct) | Multilingual | 32000 | 4096 | 26.45GB |
| [GTE-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) | Multilingual | 32000 | 3584 | 26.45GB |
| [GTE-Qwen2-1.5B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-1.5B-instruct) | Multilingual | 32000 | 1536 | 6.62GB |
## Cloud API Services
In addition to the open-source [GTE](https://huggingface.co/collections/Alibaba-NLP/gte-models-6680f0b13f885cb431e6d469) series models, the GTE models are also available as commercial API services on Alibaba Cloud.
- [Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): Three versions of the text embedding models are available: text-embedding-v1/v2/v3, with v3 being the latest API service.
- [ReRank Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-sorting-model/): The gte-rerank model service is available.
Note that the models behind the commercial APIs are not entirely identical to the open-source models.
## Citation
If you find our paper or models helpful, please consider citing:
```
@article{li2023towards,
title={Towards general text embeddings with multi-stage contrastive learning},
author={Li, Zehan and Zhang, Xin and Zhang, Yanzhao and Long, Dingkun and Xie, Pengjun and Zhang, Meishan},
journal={arXiv preprint arXiv:2308.03281},
year={2023}
}
``` | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
maastrichtlawtech/camembert-base-lleqa | maastrichtlawtech | sentence-similarity | [
"sentence-transformers",
"pytorch",
"camembert",
"feature-extraction",
"sentence-similarity",
"fr",
"dataset:maastrichtlawtech/lleqa",
"arxiv:2309.17050",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2023-09-28T13:00:42 | 2023-10-03T10:55:09 | 37 | 2 | ---
datasets:
- maastrichtlawtech/lleqa
language: fr
library_name: sentence-transformers
license: apache-2.0
metrics:
- recall
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
inference: true
widget:
- source_sentence: Je reçois des confidences liées à mon emploi. Qu'est-ce que je
risque si je viole le secret professionnel ?
sentences:
- 'Art. 1 : Les médecins, chirurgiens, officiers de santé, pharmaciens, sages-femmes
et toutes autres personnes dépositaires, par état ou par profession, des secrets
qu''on leur confie, qui, hors le cas où ils sont appelés à rendre témoignage en
justice ou devant une commission d''enquête parlementaire et celui où la loi,
    le décret ou l''ordonnance les oblige ou les autorise à faire connaître ces secrets,
les auront révélés, seront punis d''un emprisonnement d''un an à trois ans et
d''une amende de cent euros à mille euros ou d''une de ces peines seulement.'
- 'Art. 2 : L''allocataire peut demander l''allocation de naissance à partir du
sixième mois de la grossesse et en obtenir le paiement deux mois avant la date
probable de la naissance mentionnée sur le certificat médical à joindre à la demande.L''allocation
de naissance demandée conformément à l''alinéa 1er est due par la caisse d''allocations
familiales, par l''autorité ou par l''établissement public qui serait compétent,
selon le cas, pour payer les allocations familiales à la date à laquelle la demande
de paiement anticipé est introduite.'
  - 'Art. 3 : La période de maternité constitue une période de repos de douze semaines,
    ou de treize semaines en cas de naissance multiple, au cours de laquelle la titulaire
ne peut exercer son activité professionnelle habituelle ni aucune autre activité
professionnelle.'
example_title: Example
---
# camembert-base-lleqa
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on the [LLeQA](https://huggingface.co/datasets/maastrichtlawtech/lleqa) dataset for legal information retrieval in **French**.
## Usage
***
#### Sentence-Transformers
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('maastrichtlawtech/camembert-base-lleqa')
embeddings = model.encode(sentences)
print(embeddings)
```
#### 🤗 Transformers
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('maastrichtlawtech/camembert-base-lleqa')
model = AutoModel.from_pretrained('maastrichtlawtech/camembert-base-lleqa')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print(sentence_embeddings)
```
## Evaluation
***
We evaluate the model on the test set of LLeQA, which consists of 195 legal questions with a knowledge corpus of 27.9K candidate articles. We report the mean reciprocal rank (MRR), normalized discounted cumulative gain (NDCG), mean average precision (MAP), and recall at various cut-offs (R@k); a minimal sketch of the cut-off metrics follows the table.
| MRR@10 | NDCG@10 | MAP@10 | R@10 | R@100 | R@500 |
|---------:|----------:|---------:|-------:|--------:|--------:|
| 36.55 | 39.27 | 30.64 | 58.27 | 82.43 | 92.41 |
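As a rough sketch of how the cut-off metrics are computed (assuming hypothetical `ranked_ids` and `relevant_ids` per query; this is not the evaluation script):
```python
def mrr_at_k(ranked_ids: list, relevant_ids: set, k: int = 10) -> float:
    # Reciprocal rank of the first relevant article within the top-k results.
    for rank, doc_id in enumerate(ranked_ids[:k], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked_ids: list, relevant_ids: set, k: int = 10) -> float:
    # Fraction of the relevant articles retrieved within the top-k results
    # (assumes relevant_ids is non-empty).
    return len(set(ranked_ids[:k]) & relevant_ids) / len(relevant_ids)
```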
## Training
***
#### Background
We utilized the [camembert-base](https://huggingface.co/camembert-base) model and fine-tuned it on 9.3K question-article pairs in French. We used a contrastive learning objective: given a short legal question, the model should predict which of a set of sampled legal articles was actually paired with it in the dataset. Formally, we compute the cosine similarity for every possible pair in the batch, then apply the cross-entropy loss with a temperature of 0.05 against the true pairs.
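A minimal sketch of this in-batch objective (assuming precomputed question and article embeddings; not the actual training code):
```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(question_emb: torch.Tensor,
                              article_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    # Cosine similarity between every question/article pair in the batch.
    q = F.normalize(question_emb, p=2, dim=1)
    a = F.normalize(article_emb, p=2, dim=1)
    logits = (q @ a.T) / temperature
    # The true article for question i sits on the diagonal (index i).
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```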
#### Hyperparameters
We trained the model on a single Tesla V100 GPU with 32GBs of memory for 20 epochs (i.e., 5.4k steps) using a batch size of 32. We used the AdamW optimizer with an initial learning rate of 2e-05, weight decay of 0.01, learning rate warmup over the first 50 steps, and linear decay of the learning rate. The sequence length was limited to 384 tokens.
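For illustration, an optimizer and scheduler setup matching these hyperparameters might look as follows (assuming `model` is already loaded; this is a sketch, not the authors' training script):
```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-05, weight_decay=0.01)
# Warmup over the first 50 steps, then linear decay across the ~5.4k total steps.
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=50, num_training_steps=5400
)
```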
#### Data
We use the [Long-form Legal Question Answering (LLeQA)](https://huggingface.co/datasets/maastrichtlawtech/lleqa) dataset to fine-tune the model. LLeQA is a French native dataset for studying legal information retrieval and question answering. It consists of a knowledge corpus of 27,941 statutory articles collected from the Belgian legislation, and 1,868 legal questions posed by Belgian citizens and labeled by experienced jurists with a comprehensive answer rooted in relevant articles from the corpus.
## Citation
```bibtex
@article{louis2023interpretable,
author = {Louis, Antoine and van Dijck, Gijs and Spanakis, Gerasimos},
title = {Interpretable Long-Form Legal Question Answering with Retrieval-Augmented Large Language Models},
journal = {CoRR},
volume = {abs/2309.17050},
year = {2023},
url = {https://arxiv.org/abs/2309.17050},
eprinttype = {arXiv},
eprint = {2309.17050},
}
```
| [
"QUESTION_ANSWERING"
] | [
"CAS"
] |
bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-AllSoft | bobox | sentence-similarity | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:78183",
"loss:AdaptiveLayerLoss",
"loss:CoSENTLoss",
"loss:GISTEmbedLoss",
"loss:OnlineContrastiveLoss",
"loss:MultipleNegativesSymmetricRankingLoss",
"en",
"dataset:sentence-transformers/all-nli",
"dataset:sentence-transformers/stsb",
"dataset:tals/vitaminc",
"dataset:nyu-mll/glue",
"dataset:allenai/scitail",
"dataset:sentence-transformers/xsum",
"dataset:sentence-transformers/sentence-compression",
"dataset:allenai/sciq",
"dataset:allenai/qasc",
"dataset:allenai/openbookqa",
"dataset:sentence-transformers/msmarco-msmarco-distilbert-base-v3",
"dataset:sentence-transformers/natural-questions",
"dataset:sentence-transformers/trivia-qa",
"dataset:sentence-transformers/quora-duplicates",
"dataset:sentence-transformers/gooaq",
"arxiv:1908.10084",
"arxiv:2402.14776",
"arxiv:2402.16829",
"base_model:microsoft/deberta-v3-small",
"base_model:finetune:microsoft/deberta-v3-small",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-03T09:54:47 | 2024-07-03T13:09:54 | 37 | 0 | ---
base_model: microsoft/deberta-v3-small
datasets:
- sentence-transformers/all-nli
- sentence-transformers/stsb
- tals/vitaminc
- nyu-mll/glue
- allenai/scitail
- sentence-transformers/xsum
- sentence-transformers/sentence-compression
- allenai/sciq
- allenai/qasc
- allenai/openbookqa
- sentence-transformers/msmarco-msmarco-distilbert-base-v3
- sentence-transformers/natural-questions
- sentence-transformers/trivia-qa
- sentence-transformers/quora-duplicates
- sentence-transformers/gooaq
language:
- en
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:78183
- loss:AdaptiveLayerLoss
- loss:CoSENTLoss
- loss:GISTEmbedLoss
- loss:OnlineContrastiveLoss
- loss:MultipleNegativesSymmetricRankingLoss
widget:
- source_sentence: The X and Y chromosomes in human beings that determine the sex
of an individual.
sentences:
- A glacier leaves behind bare rock when it retreats.
- Prokaryotes are unicellular organisms that lack organelles surrounded by membranes.
- Mammalian sex determination is determined genetically by the presence of chromosomes
identified by the letters x and y.
- source_sentence: Police officer with riot shield stands in front of crowd.
sentences:
- A police officer stands in front of a crowd.
- A pair of people play video games together on a couch.
- People are outside digging a hole.
- source_sentence: A young girl sitting on a white comforter on a bed covered with
clothing, holding a yellow stuffed duck.
sentences:
- A man standing in a room is pointing up.
- A Little girl is enjoying cake outside.
- A yellow duck being held by a girl.
- source_sentence: A teenage girl in winter clothes slides down a decline in a red
sled.
sentences:
- A woman preparing vegetables.
- A girl is sliding on a red sled.
- A person is on a beach.
- source_sentence: How many hymns of Luther were included in the Achtliederbuch?
sentences:
- the ABC News building was renamed Peter Jennings Way in 2006 in honor of the recently
deceased longtime ABC News chief anchor and anchor of World News Tonight.
- In early 2009, Disney–ABC Television Group merged ABC Entertainment and ABC Studios
into a new division, ABC Entertainment Group, which would be responsible for both
its production and broadcasting operations.
- Luther's hymns were included in early Lutheran hymnals and spread the ideas of
the Reformation.
model-index:
- name: SentenceTransformer based on microsoft/deberta-v3-small
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.7746195773286169
name: Pearson Cosine
- type: spearman_cosine
value: 0.7690423402274569
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7641811345210845
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.754454714808573
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7621768998872902
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7522944339564277
name: Spearman Euclidean
- type: pearson_dot
value: 0.643272843908074
name: Pearson Dot
- type: spearman_dot
value: 0.6187202562345202
name: Spearman Dot
- type: pearson_max
value: 0.7746195773286169
name: Pearson Max
- type: spearman_max
value: 0.7690423402274569
name: Spearman Max
- type: pearson_cosine
value: 0.7408543477349779
name: Pearson Cosine
- type: spearman_cosine
value: 0.7193195268794856
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7347205138738226
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.716277121285963
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7317357204840789
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.7133569462956698
name: Spearman Euclidean
- type: pearson_dot
value: 0.5412116736741877
name: Pearson Dot
- type: spearman_dot
value: 0.5324862690078268
name: Spearman Dot
- type: pearson_max
value: 0.7408543477349779
name: Pearson Max
- type: spearman_max
value: 0.7193195268794856
name: Spearman Max
---
# SentenceTransformer based on microsoft/deberta-v3-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli), [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb), [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc), [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue), [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail), [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail), [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum), [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression), [sciq_pairs](https://huggingface.co/datasets/allenai/sciq), [qasc_pairs](https://huggingface.co/datasets/allenai/qasc), [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa), [msmarco_pairs](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3), [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions), [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa), [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) and [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) <!-- at revision a36c739020e01763fe789b4b85e2df55d6180012 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 tokens
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli)
- [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb)
- [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc)
- [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue)
- [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail)
- [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail)
- [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum)
- [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression)
- [sciq_pairs](https://huggingface.co/datasets/allenai/sciq)
- [qasc_pairs](https://huggingface.co/datasets/allenai/qasc)
- [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa)
- [msmarco_pairs](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3)
- [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa)
- [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
- [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-AllSoft")
# Run inference
sentences = [
'How many hymns of Luther were included in the Achtliederbuch?',
"Luther's hymns were included in early Lutheran hymnals and spread the ideas of the Reformation.",
'the ABC News building was renamed Peter Jennings Way in 2006 in honor of the recently deceased longtime ABC News chief anchor and anchor of World News Tonight.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:----------|
| pearson_cosine | 0.7746 |
| **spearman_cosine** | **0.769** |
| pearson_manhattan | 0.7642 |
| spearman_manhattan | 0.7545 |
| pearson_euclidean | 0.7622 |
| spearman_euclidean | 0.7523 |
| pearson_dot | 0.6433 |
| spearman_dot | 0.6187 |
| pearson_max | 0.7746 |
| spearman_max | 0.769 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### nli-pairs
* Dataset: [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,500 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 16.62 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.46 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:---------------------------------------------------------------------------|:-------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
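For reference, these parameters map onto the sentence-transformers loss constructors roughly as follows; the guide model below is a placeholder assumption, since the card does not state which guide was used for GISTEmbedLoss:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import AdaptiveLayerLoss, GISTEmbedLoss

model = SentenceTransformer("microsoft/deberta-v3-small")
guide = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder guide model

loss = AdaptiveLayerLoss(
    model=model,
    loss=GISTEmbedLoss(model=model, guide=guide),
    n_layers_per_step=-1,
    last_layer_weight=2,
    prior_layers_weight=0.1,
    kl_div_weight=0.5,
    kl_temperature=1,
)
```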
#### sts-label
* Dataset: [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.81 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.74 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
#### vitaminc-pairs
* Dataset: [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc) at [be6febb](https://huggingface.co/datasets/tals/vitaminc/tree/be6febb761b0b2807687e61e0b5282e459df2fa0)
* Size: 3,194 training samples
* Columns: <code>label</code>, <code>sentence1</code>, and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | label | sentence1 | sentence2 |
|:--------|:-----------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | int | string | string |
| details | <ul><li>1: 100.00%</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.76 tokens</li><li>max: 75 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 37.3 tokens</li><li>max: 502 tokens</li></ul> |
* Samples:
| label | sentence1 | sentence2 |
|:---------------|:------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>1</code> | <code>The film will be screened in 2200 theaters .</code> | <code>In the United States and Canada , pre-release tracking suggest the film will gross $ 7–8 million from 2,200 theaters in its opening weekend , trailing fellow newcomer 10 Cloverfield Lane ( $ 25–30 million projection ) , but similar t</code> |
| <code>1</code> | <code>Neighbors 2 : Sorority Rising ( film ) scored over 65 % on Rotten Tomatoes .</code> | <code>On Rotten Tomatoes , the film has a rating of 67 % , based on 105 reviews , with an average rating of 5.9/10 .</code> |
| <code>1</code> | <code>Averaged on more than 65 reviews , The Handmaiden scored 94 % .</code> | <code>On Rotten Tomatoes , the film has a rating of 94 % , based on 67 reviews , with an average rating of 8/10 .</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### qnli-contrastive
* Dataset: [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue) at [bcdcba7](https://huggingface.co/datasets/nyu-mll/glue/tree/bcdcba79d07bc864c1c254ccfcedcce55bcc9a8c)
* Size: 4,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.64 tokens</li><li>max: 30 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 34.57 tokens</li><li>max: 149 tokens</li></ul> | <ul><li>0: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:-----------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>What professors established the importance of Whitehead's work?</code> | <code>Professors such as Wieman, Charles Hartshorne, Bernard Loomer, Bernard Meland, and Daniel Day Williams made Whitehead's philosophy arguably the most important intellectual thread running through the Divinity School.</code> | <code>0</code> |
| <code>When did people start living on the edge of the desert?</code> | <code>It was long believed that the region had been this way since about 1600 BCE, after shifts in the Earth's axis increased temperatures and decreased precipitation.</code> | <code>0</code> |
| <code>What was the title of Gertrude Stein's 1906-1908 book?</code> | <code>Picasso in turn was an important influence on Stein's writing.</code> | <code>0</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "OnlineContrastiveLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### scitail-pairs-qa
* Dataset: [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44)
* Size: 4,300 training samples
* Columns: <code>sentence2</code> and <code>sentence1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence2 | sentence1 |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 16.2 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 14.65 tokens</li><li>max: 33 tokens</li></ul> |
* Samples:
| sentence2 | sentence1 |
|:-------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------|
| <code>Ash that enters the air naturally as a result of a volcano eruption is classified as a primary pollutant.</code> | <code>Ash that enters the air naturally as a result of a volcano eruption is classified as what kind of pollutant?</code> |
| <code>Exposure to ultraviolet radiation can increase the amount of pigment in the skin and make it appear darker.</code> | <code>Exposure to what can increase the amount of pigment in the skin and make it appear darker?</code> |
| <code>A lysozyme destroys bacteria by digesting their cell walls.</code> | <code>How does lysozyme destroy bacteria?</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### scitail-pairs-pos
* Dataset: [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44)
* Size: 2,200 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:---------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 23.6 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.23 tokens</li><li>max: 41 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------|
| <code>An atom that gains electrons would be a negative ion.</code> | <code>Atoms that have gained electrons and become negatively charged are called negative ions.</code> |
| <code>Scientists will use data collected during the collisions to explore the particles known as quarks and gluons that make up protons and neutrons.</code> | <code>Protons and neutrons are made of quarks, which are fundamental particles of matter.</code> |
| <code>Watersheds and divides All of the land area whose water drains into a stream system is called the system's watershed.</code> | <code>All of the land drained by a river system is called its basin, or the "wet" term watershed</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### xsum-pairs
* Dataset: [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum) at [788ddaf](https://huggingface.co/datasets/sentence-transformers/xsum/tree/788ddafe04e539956d56b567bc32a036ee7b9206)
* Size: 2,500 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 350.46 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 27.13 tokens</li><li>max: 70 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>An eyewitness told BBC Persian that the crowds were sharply divided between hardliners and moderates, but it was clear many people had responded to a call from former President Mohammad Khatami to attend the funeral as a show of support for the opposition reform movement.<br>Some were chanting opposition slogans, and others carried placards emphasising Mr Rafsanjani's links to the moderate and reformist camps.<br>"Long live Khatami, Long Live Rouhani. Hashemi, your soul is at peace!" said one banner.<br>"The circle became too closed for the centre," said another, using a quotation from Persian poetry to underline the growing distance in recent years between Mr Rafsanjani and Iran's hardline political establishment.<br>At one stage state television played loud music over its live broadcast of the event in order to drown out opposition slogans being chanted by the crowd.<br>As the official funeral eulogies were relayed to the crowds on the streets, they responded with calls of support for former President Khatami, and opposition leader Mir Hossein Mousavi, and shouts of: "You have the loudspeakers, we have the voice! Shame on you, Shame on State TV!"<br>On Iranian social media the funeral has been the number one topic with many opposition supporters using the hashtag #weallgathered to indicate their support and sympathy.<br>People have been posting photos and videos emphasising the number of opposition supporters out on the streets and showing the opposition slogans which state TV has been trying to obscure.<br>But government supporters have also taken to Twitter to play down the opposition showing at the funeral, accusing them of political opportunism.<br>"A huge army came out of love of the Supreme Leader," wrote a cleric called Sheikh Reza. "While a few foot soldiers came with their cameras to show off."<br>Another conversation engaging many on Twitter involved the wording of the prayers used at the funeral.<br>Did the Supreme Leader Ayatollah Ali Khamenei deliberately leave out a section praising the goodness of the deceased, some opposition supporters asked. And was this a comment on the political tensions between the two?<br>"No," responded another Twitter user, cleric Abbas Zolghadri. "The words of the prayer can be changed. There are no strict rules."<br>He followed this with a poignant photo of an empty grave - "Hashemi's final resting place" was the caption, summing up the sense of loss felt by Iranians of many different political persuasions despite the deep and bitter divisions.</code> | <code>Tehran has seen some of the biggest crowds on the streets since the 2009 "Green Movement" opposition demonstrations, as an estimated 2.5 million people gathered to bid farewell to Akbar Hashemi Rafsanjani, the man universally known as "Hashemi".</code> |
| <code>Mark Evans is retracing the same route across the Rub Al Khali, also known as the "Empty Quarter", taken by Bristol pioneer Bertram Thomas in 1930.<br>The 54-year-old Shropshire-born explorer is leading a three-man team to walk the 800 mile (1,300 km) journey from Salalah, Oman to Doha, Qatar.<br>The trek is expected to take 60 days.<br>The Rub Al Khali desert is considered one of the hottest, driest and most inhospitable places on earth.<br>Nearly two decades after Thomas completed his trek, British explorer and writer Sir Wilfred Thesiger crossed the Empty Quarter - mapping it in detail along the way.<br>60 days<br>To cross the Rub' Al Khali desert<br>* From Salalah in Oman to Doha, Qatar<br>* Walking with camels for 1,300km<br>* Area nearly three times the size of the UK<br>Completed by explorer Bertram Thomas in 1930<br>Bertram Thomas, who hailed from Pill, near Bristol, received telegrams of congratulation from both King George V and Sultan Taimur, then ruler of Oman.<br>He went on to lecture all over the world about the journey and to write a book called Arabia Felix.<br>Unlike Mr Evans, Thomas did not obtain permission for his expedition.<br>He said: "The biggest challenges for Thomas were warring tribes, lack of water in the waterholes and his total dependence on his Omani companion Sheikh Saleh to negotiate their way through the desert.<br>"The biggest challenge for those who wanted to make the crossing in recent decades has been obtaining government permissions to walk through this desolate and unknown territory."</code> | <code>An explorer has embarked on a challenge to become only the third British person in history to cross the largest sand desert in the world.</code> |
| <code>An Olympic gold medallist, he was also three-time world heavyweight champion and took part in some of the most memorable fights in boxing history.<br>He had a professional career spanning 21 years and BBC Sport takes a look at his 61 fights in more detail.</code> | <code>Boxing legend Muhammad Ali, who died at the age of 74, became a sporting icon during his career.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### compression-pairs
* Dataset: [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression) at [605bc91](https://huggingface.co/datasets/sentence-transformers/sentence-compression/tree/605bc91d95631895ba25b6eda51a3cb596976c90)
* Size: 4,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 31.89 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.21 tokens</li><li>max: 28 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|
| <code>The USHL completed an expansion draft on Monday as 10 players who were on the rosters of USHL teams during the 2009-10 season were selected by the League's two newest entries, the Muskegon Lumberjacks and Dubuque Fighting Saints.</code> | <code>USHL completes expansion draft</code> |
| <code>Major League Baseball Commissioner Bud Selig will be speaking at St. Norbert College next month.</code> | <code>Bud Selig to speak at St. Norbert College</code> |
| <code>It's fresh cherry time in Michigan and the best time to enjoy this delicious and nutritious fruit.</code> | <code>It's cherry time</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "MultipleNegativesSymmetricRankingLoss",
"n_layers_per_step": -1,
"last_layer_weight": 1.5,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
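
For the compression pairs the wrapper stays the same but the inner loss switches to `MultipleNegativesSymmetricRankingLoss` with a lower `last_layer_weight`. A minimal sketch, reusing the placeholder `model` from the sketch above:

```python
# Minimal sketch: the same adaptive-layer wrapper around the symmetric ranking loss.
# "model" is the placeholder SentenceTransformer from the earlier sketch.
from sentence_transformers.losses import (
    AdaptiveLayerLoss,
    MultipleNegativesSymmetricRankingLoss,
)

compression_loss = AdaptiveLayerLoss(
    model=model,
    loss=MultipleNegativesSymmetricRankingLoss(model=model),
    n_layers_per_step=-1,
    last_layer_weight=1.5,   # lower than the 2.0 used with GISTEmbedLoss
    prior_layers_weight=0.1,
    kl_div_weight=0.5,
    kl_temperature=1,
)
```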
#### sciq_pairs
* Dataset: [sciq_pairs](https://huggingface.co/datasets/allenai/sciq) at [2c94ad3](https://huggingface.co/datasets/allenai/sciq/tree/2c94ad3e1aafab77146f384e23536f97a4849815)
* Size: 6,500 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 17.26 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 84.37 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What type of organism is commonly used in preparation of foods such as cheese and yogurt?</code> | <code>Mesophiles grow best in moderate temperature, typically between 25°C and 40°C (77°F and 104°F). Mesophiles are often found living in or on the bodies of humans or other animals. The optimal growth temperature of many pathogenic mesophiles is 37°C (98°F), the normal human body temperature. Mesophilic organisms have important uses in food preparation, including cheese, yogurt, beer and wine.</code> |
| <code>What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?</code> | <code>Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to southwest or the reverse in the Northern Hemisphere. The winds blow northwest to southeast or the reverse in the southern hemisphere.</code> |
| <code>Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always what?</code> | <code>Summary Changes of state are examples of phase changes, or phase transitions. All phase changes are accompanied by changes in the energy of a system. Changes from a more-ordered state to a less-ordered state (such as a liquid to a gas) areendothermic. Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always exothermic. The conversion of a solid to a liquid is called fusion (or melting). The energy required to melt 1 mol of a substance is its enthalpy of fusion (ΔHfus). The energy change required to vaporize 1 mol of a substance is the enthalpy of vaporization (ΔHvap). The direct conversion of a solid to a gas is sublimation. The amount of energy needed to sublime 1 mol of a substance is its enthalpy of sublimation (ΔHsub) and is the sum of the enthalpies of fusion and vaporization. Plots of the temperature of a substance versus heat added or versus heating time at a constant rate of heating are calledheating curves. Heating curves relate temperature changes to phase transitions. A superheated liquid, a liquid at a temperature and pressure at which it should be a gas, is not stable. A cooling curve is not exactly the reverse of the heating curve because many liquids do not freeze at the expected temperature. Instead, they form a supercooled liquid, a metastable liquid phase that exists below the normal melting point. Supercooled liquids usually crystallize on standing, or adding a seed crystal of the same or another substance can induce crystallization.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### qasc_pairs
* Dataset: [qasc_pairs](https://huggingface.co/datasets/allenai/qasc) at [a34ba20](https://huggingface.co/datasets/allenai/qasc/tree/a34ba204eb9a33b919c10cc08f4f1c8dae5ec070)
* Size: 6,500 training samples
* Columns: <code>id</code>, <code>sentence1</code>, and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | id | sentence1 | sentence2 |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 21.35 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.47 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 35.55 tokens</li><li>max: 66 tokens</li></ul> |
* Samples:
| id | sentence1 | sentence2 |
|:--------------------------------------------|:---------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>3E7TUJ2EGCLQNOV1WEAJ2NN9ROPD9K</code> | <code>What type of water formation is formed by clouds?</code> | <code>beads of water are formed by water vapor condensing. Clouds are made of water vapor.. Beads of water can be formed by clouds.</code> |
| <code>3LS2AMNW5FPNJK3C3PZLZCPX562OQO</code> | <code>Where do beads of water come from?</code> | <code>beads of water are formed by water vapor condensing. Condensation is the change of water vapor to a liquid.. Vapor turning into a liquid leaves behind beads of water</code> |
| <code>3TMFV4NEP8DPIPCI8H9VUFHJG8V8W3</code> | <code>What forms beads of water? </code> | <code>beads of water are formed by water vapor condensing. An example of water vapor is steam.. Steam forms beads of water.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### openbookqa_pairs
* Dataset: [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa) at [388097e](https://huggingface.co/datasets/allenai/openbookqa/tree/388097ea7776314e93a529163e0fea805b8a6454)
* Size: 2,740 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 13.83 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.37 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:-------------------------------------------------|:--------------------------------------------------------------------------|
| <code>The sun is responsible for</code> | <code>the sun is the source of energy for physical cycles on Earth</code> |
| <code>When food is reduced in the stomach</code> | <code>digestion is when stomach acid breaks down food</code> |
| <code>Stars are</code> | <code>a star is made of gases</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### msmarco_pairs
* Dataset: [msmarco_pairs](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3) at [28ff31e](https://huggingface.co/datasets/sentence-transformers/msmarco-msmarco-distilbert-base-v3/tree/28ff31e4c97cddd53d298497f766e653f1e666f9)
* Size: 6,500 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 4 tokens</li><li>mean: 8.61 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 75.09 tokens</li><li>max: 206 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>what are the liberal arts?</code> | <code>liberal arts. 1. the academic course of instruction at a college intended to provide general knowledge and comprising the arts, humanities, natural sciences, and social sciences, as opposed to professional or technical subjects.</code> |
| <code>what is the mechanism of action of fibrinolytic or thrombolytic drugs?</code> | <code>Baillière's Clinical Haematology. 6 Mechanism of action of the thrombolytic agents. 6 Mechanism of action of the thrombolytic agents JEFFREY I. WEITZ Fibrin formed during the haemostatic, inflammatory or tissue repair process serves a temporary role, and must be degraded to restore normal tissue function and structure.</code> |
| <code>what is normal plat count</code> | <code>78 Followers. A. Platelets are the tiny blood cells that help stop bleeding by binding together to form a clump or plug at sites of injury inside blood vessels. A normal platelet count is between 150,000 and 450,000 platelets per microliter (one-millionth of a liter, abbreviated mcL).The average platelet count is 237,000 per mcL in men and 266,000 per mcL in women.8 Followers. A. Platelets are the tiny blood cells that help stop bleeding by binding together to form a clump or plug at sites of injury inside blood vessels. A normal platelet count is between 150,000 and 450,000 platelets per microliter (one-millionth of a liter, abbreviated mcL).</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### nq_pairs
* Dataset: [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 6,500 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.77 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 131.57 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:----------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>when did richmond last play in a preliminary final</code> | <code>Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tigers took over the game as it progressed and scored seven straight goals at one point. They eventually would win by 48 points – 16.12 (108) to Adelaide's 8.12 (60) – to end their 37-year flag drought.[22] Dustin Martin also became the first player to win a Premiership medal, the Brownlow Medal and the Norm Smith Medal in the same season, while Damien Hardwick was named AFL Coaches Association Coach of the Year. Richmond's jump from 13th to premiers also marked the biggest jump from one AFL season to the next.</code> |
| <code>who sang what in the world's come over you</code> | <code>Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.</code> |
| <code>who produces the most wool in the world</code> | <code>Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### trivia_pairs
* Dataset: [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa) at [a7c36e3](https://huggingface.co/datasets/sentence-transformers/trivia-qa/tree/a7c36e3c8c8c01526bc094d79bf80d4c848b0ad0)
* Size: 6,500 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 15.16 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 456.87 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Which American-born Sinclair won the Nobel Prize for Literature in 1930?</code> | <code>The Nobel Prize in Literature 1930 The Nobel Prize in Literature 1930 Sinclair Lewis The Nobel Prize in Literature 1930 Sinclair Lewis Prize share: 1/1 The Nobel Prize in Literature 1930 was awarded to Sinclair Lewis "for his vigorous and graphic art of description and his ability to create, with wit and humour, new types of characters". Photos: Copyright © The Nobel Foundation Share this: To cite this page MLA style: "The Nobel Prize in Literature 1930". Nobelprize.org. Nobel Media AB 2014. Web. 18 Jan 2017. <http://www.nobelprize.org/nobel_prizes/literature/laureates/1930/></code> |
| <code>Where in England was Dame Judi Dench born?</code> | <code>Judi Dench - IMDb IMDb Actress | Music Department | Soundtrack Judi Dench was born in York, England, to Eleanora Olive (Jones), who was from Dublin, Ireland, and Reginald Arthur Dench, a doctor from Dorset, England. She attended Mount School in York, and studied at the Central School of Speech and Drama. She has performed with Royal Shakespeare Company, the National Theatre, and at Old Vic Theatre. She is a ... See full bio » Born: a list of 35 people created 02 Jul 2011 a list of 35 people created 19 Apr 2012 a list of 35 people created 28 May 2014 a list of 25 people created 05 Aug 2014 a list of 26 people created 18 May 2015 Do you have a demo reel? Add it to your IMDbPage How much of Judi Dench's work have you seen? User Polls Won 1 Oscar. Another 59 wins & 163 nominations. See more awards » Known For 2016 The Hollow Crown (TV Series) Cecily, Duchess of York 2015 The Vote (TV Movie) Christine Metcalfe - Total War (1996) ... Narrator (voice) - Stalemate (1996) ... Narrator (voice) 1992 The Torch (TV Mini-Series) Aba 1990 Screen One (TV Series) Anne 1989 Behaving Badly (TV Mini-Series) Bridget 1981 BBC2 Playhouse (TV Series) Sister Scarli 1976 Arena (TV Series documentary) Sweetie Simpkins 1973 Ooh La La! (TV Series) Amélie 1966 Court Martial (TV Series) Marthe 1963 Z Cars (TV Series) Elena Collins 1963 Love Story (TV Series) Pat McKendrick 1960 The Terrible Choice (TV Series) Good Angel Music department (1 credit) A Fine Romance (TV Series) (theme sung by - 14 episodes, 1981 - 1983) (theme song sung by - 12 episodes, 1983 - 1984) - A Romantic Meal (1984) ... (theme song sung by) - Problems (1984) ... (theme song sung by) 2013 Fifty Years on Stage (TV Movie) (performer: "Send in the Clowns") 2009 Nine (performer: "Folies Bergère") - What's Wrong with Mrs Bale? (1997) ... (performer: "Raindrops Keep Fallin' On My Head" - uncredited) - Misunderstandings (1993) ... (performer: "Walkin' My Baby Back Home" - uncredited) 1982-1984 A Fine Romance (TV Series) (performer - 2 episodes) - The Telephone Call (1984) ... (performer: "Boogie Woogie Bugle Boy" - uncredited) - Furniture (1982) ... (performer: "Rule, Britannia!" - uncredited) Hide 2009 Waiting in Rhyme (Video short) (special thanks) 2007 Expresso (Short) (special thanks) 1999 Shakespeare in Love and on Film (TV Movie documentary) (thanks - as Dame Judi Dench) Hide 2016 Rio Olympics (TV Mini-Series) Herself 2015 In Conversation (TV Series documentary) Herself 2015 Entertainment Tonight (TV Series) Herself 2015 CBS This Morning (TV Series) Herself - Guest 2015 The Insider (TV Series) Herself 1999-2014 Cinema 3 (TV Series) Herself 2013 Good Day L.A. (TV Series) Herself - Guest 2013 Arena (TV Series documentary) Herself 2013 At the Movies (TV Series) Herself 2013 Shooting Bond (Video documentary) Herself 2013 Bond's Greatest Moments (TV Movie documentary) Herself 2012 Made in Hollywood (TV Series) Herself 1999-2012 Charlie Rose (TV Series) Herself - Guest 2008-2012 This Morning (TV Series) Herself - Guest 2012 The Secrets of Skyfall (TV Short documentary) Herself 2012 Anderson Live (TV Series) Herself 2012 J. Edgar: A Complicated Man (Video documentary short) Herself 2011 The Many Faces of... (TV Series documentary) Herself / Various Characters 2011 Na plovárne (TV Series) Herself 2010 BBC Proms (TV Series) Herself 2010 The South Bank Show Revisited (TV Series documentary) Herself - Episode #6.68 (2009) ... Herself - Guest (as Dame Judi Dench) 2007-2009 Breakfast (TV Series) 2009 Larry King Live (TV Series) Herself - Guest 2009 The One Show (TV Series) Herself 2009 Cranford in Detail (Video documentary short) Herself / Miss Matty Jenkins (as Dame Judi Dench) 2005-2008 The South Bank Show (TV Series documentary) Herself 2008 Tavis Smiley (TV Series) Herself - Guest 2007 ITV News (TV Series) Herself - BAFTA Nominee 2007 The Making of Cranford (Video documentary short) Herself / Miss Matty Jenkyns (as Dame Judi Dench) 2006 Becoming Bond (TV Movie documentary) Herself 2006 Corazón de... (TV Series) Hers</code> |
| <code>In which decade did Billboard magazine first publish and American hit chart?</code> | <code>The US Billboard song chart The US Billboard song chart Search this site with Google Song chart US Billboard The Billboard magazine has published various music charts starting (with sheet music) in 1894, the first "Music Hit Parade" was published in 1936 , the first "Music Popularity Chart" was calculated in 1940 . These charts became less irregular until the weekly "Hot 100" was started in 1958 . The current chart combines sales, airplay and downloads. A music collector that calls himself Bullfrog has been consolidating the complete chart from 1894 to the present day. he has published this information in a comprehenive spreadsheet (which can be obtained at bullfrogspond.com/ ). The Bullfrog data assigns each song a unique identifier, something like "1968_076" (which just happens to be the Bee Gees song "I've Gotta Get A Message To You"). This "Whitburn Number" is provided to match with the books of Joel Whitburn and consists of the year and a ranking within the year. A song that first entered the charts in December and has a long run is listed the following year. This numbering scheme means that songs which are still in the charts cannot be assigned a final id, because their ranking might change. So the definitive listing for a year cannot be final until about April. In our listing we only use songs with finalised IDs, this means that every year we have to wait until last year's entries are finalised before using them. (Source bullfrogspond.com/ , the original version used here was 20090808 with extra data from: the 2009 data from 20091219 the 2010 data from 20110305 the 2011 data from 20120929 the 2012 data from 20130330 the 2013 data from 20150328 The 20150328 data was the last one produced before the Billboard company forced the data to be withdrawn. As far as we know there are no more recent data sets available. This pattern of obtaining the data for a particular year in the middle of the following one comes from the way that the Bullfrog project generates the identifier for a song (what they call the "Prefix" in the spreadsheet). Recent entries are identified with keys like "2015-008" while older ones have keys like "2013_177". In the second case the underscore is significant, it indicates that this was the 177th biggest song released in 2013. Now, of course, during the year no one knows where a particular song will rank, so the underscore names can't be assigned until every song from a particular year has dropped out of the charts, so recent records are temporarily assigned a name with a dash. In about May of the following year the rankings are calculated and the final identifiers are assigned. That is why we at the Turret can only grab this data retrospectively. Attributes The original spreadsheet has a number of attributes, we have limited our attention to just a few of them: 134 9 The songs with the most entries on the chart were White Christmas (with 33 versions and a total of 110 weeks) and Stardust (with 19 and a total of 106 weeks). position The peak position that songs reached in the charts should show an smooth curve from number one down to the lowest position. This chart has more songs in the lower peak positions than one would expect. Before 1991 the profile of peak positions was exactly as you would expect, that year Billboard introduced the concept of "Recurrent" tracks, that is they removed any track from the chart which had spent more than twenty weeks in the chart and had fallen to the lower positions. weeks The effect of the "Recurrent" process, by which tracks are removed if they have spent at least twenty weeks in the chart and have fallen to the lower reaches, can clearly be seen in the strange spike in this attribute. This "adjustment" was intended to promote newer songs and ensure the chart does not become "stale". In fact since it was introduced in 1991 the length of long chart runs has increased, this might reflect the more conscious efforts of record companies to "game" the charts by controlling release times and promotions, or it coul</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### quora_pairs
* Dataset: [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 4,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.53 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.68 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:----------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------|
| <code>Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?</code> | <code>I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?</code> |
| <code>How can I be a good geologist?</code> | <code>What should I do to be a great geologist?</code> |
| <code>How do I read and find my YouTube comments?</code> | <code>How can I see all my Youtube comments?</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### gooaq_pairs
* Dataset: [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 6,500 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.6 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 57.74 tokens</li><li>max: 127 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:---------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>is toprol xl the same as metoprolol?</code> | <code>Metoprolol succinate is also known by the brand name Toprol XL. It is the extended-release form of metoprolol. Metoprolol succinate is approved to treat high blood pressure, chronic chest pain, and congestive heart failure.</code> |
| <code>are you experienced cd steve hoffman?</code> | <code>The Are You Experienced album was apparently mastered from the original stereo UK master tapes (according to Steve Hoffman - one of the very few who has heard both the master tapes and the CDs produced over the years). ... The CD booklets were a little sparse, but at least they stayed true to the album's original design.</code> |
| <code>how are babushka dolls made?</code> | <code>Matryoshka dolls are made of wood from lime, balsa, alder, aspen, and birch trees; lime is probably the most common wood type. ... After cutting, the trees are stripped of most of their bark, although a few inner rings of bark are left to bind the wood and keep it from splitting.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
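
Each training section above pairs a named dataset with its own loss. In sentence-transformers this is expressed by passing matching dictionaries of datasets and losses to the trainer. The two-dataset sketch below is illustrative only: the dataset slices are assumptions, and `loss` and `compression_loss` refer to the placeholder sketches above, not the exact training script:

```python
# Minimal sketch of multi-dataset training: each dataset name maps to its own loss.
# Dataset slices are illustrative; the actual mix is listed in the sections above.
from datasets import load_dataset
from sentence_transformers import SentenceTransformerTrainer

train_datasets = {
    "gooaq_pairs": load_dataset("sentence-transformers/gooaq", split="train[:6500]"),
    "compression-pairs": load_dataset(
        "sentence-transformers/sentence-compression", split="train[:4000]"
    ),
}
losses = {
    "gooaq_pairs": loss,                     # AdaptiveLayerLoss(GISTEmbedLoss) sketch above
    "compression-pairs": compression_loss,  # AdaptiveLayerLoss(MNSRL) sketch above
}

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_datasets, loss=losses)
trainer.train()
```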
### Evaluation Datasets
#### nli-pairs
* Dataset: [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 750 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.61 tokens</li><li>max: 51 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.71 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### scitail-pairs-pos
* Dataset: [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44)
* Size: 750 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 5 tokens</li><li>mean: 22.43 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.3 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>0: ~50.00%</li><li>1: ~50.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:----------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------|
| <code>An introduction to atoms and elements, compounds, atomic structure and bonding, the molecule and chemical reactions.</code> | <code>Replace another in a molecule happens to atoms during a substitution reaction.</code> | <code>0</code> |
| <code>Wavelength The distance between two consecutive points on a sinusoidal wave that are in phase;</code> | <code>Wavelength is the distance between two corresponding points of adjacent waves called.</code> | <code>1</code> |
| <code>humans normally have 23 pairs of chromosomes.</code> | <code>Humans typically have 23 pairs pairs of chromosomes.</code> | <code>1</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "GISTEmbedLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
#### qnli-contrastive
* Dataset: [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue) at [bcdcba7](https://huggingface.co/datasets/nyu-mll/glue/tree/bcdcba79d07bc864c1c254ccfcedcce55bcc9a8c)
* Size: 750 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.15 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 36.98 tokens</li><li>max: 225 tokens</li></ul> | <ul><li>0: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>What came into force after the new constitution was herald?</code> | <code>As of that day, the new constitution heralding the Second Republic came into force.</code> | <code>0</code> |
| <code>What is the first major city in the stream of the Rhine?</code> | <code>The most important tributaries in this area are the Ill below of Strasbourg, the Neckar in Mannheim and the Main across from Mainz.</code> | <code>0</code> |
| <code>What is the minimum required if you want to teach in Canada?</code> | <code>In most provinces a second Bachelor's Degree such as a Bachelor of Education is required to become a qualified teacher.</code> | <code>0</code> |
* Loss: [<code>AdaptiveLayerLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#adaptivelayerloss) with these parameters:
```json
{
"loss": "OnlineContrastiveLoss",
"n_layers_per_step": -1,
"last_layer_weight": 2,
"prior_layers_weight": 0.1,
"kl_div_weight": 0.5,
"kl_temperature": 1
}
```
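
Because qnli-contrastive carries binary labels, the wrapped loss here is `OnlineContrastiveLoss` rather than an in-batch-negatives loss. A minimal sketch, again reusing the placeholder `model`:

```python
# Minimal sketch: adaptive-layer wrapper around OnlineContrastiveLoss for the
# labeled (sentence1, sentence2, label) pairs; "model" is the placeholder above.
from sentence_transformers.losses import AdaptiveLayerLoss, OnlineContrastiveLoss

qnli_loss = AdaptiveLayerLoss(
    model=model,
    loss=OnlineContrastiveLoss(model=model),
    n_layers_per_step=-1,
    last_layer_weight=2,
    prior_layers_weight=0.1,
    kl_div_weight=0.5,
    kl_temperature=1,
)
```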
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 28
- `per_device_eval_batch_size`: 18
- `learning_rate`: 2e-05
- `weight_decay`: 1e-06
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine_with_restarts
- `lr_scheduler_kwargs`: {'num_cycles': 3}
- `warmup_ratio`: 0.25
- `save_safetensors`: False
- `fp16`: True
- `push_to_hub`: True
- `hub_model_id`: bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-2-checkpoints-tmp
- `hub_strategy`: checkpoint
- `batch_sampler`: no_duplicates
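
A minimal sketch of how the non-default values above might be passed as training arguments; `output_dir` is an assumed placeholder, while the remaining values mirror the list:

```python
# Minimal sketch mapping the listed non-default hyperparameters onto
# SentenceTransformerTrainingArguments; output_dir is an assumed placeholder.
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # assumption: not recorded in this card
    eval_strategy="steps",
    per_device_train_batch_size=28,
    per_device_eval_batch_size=18,
    learning_rate=2e-5,
    weight_decay=1e-6,
    num_train_epochs=2,
    lr_scheduler_type="cosine_with_restarts",
    lr_scheduler_kwargs={"num_cycles": 3},
    warmup_ratio=0.25,
    save_safetensors=False,
    fp16=True,
    push_to_hub=True,
    hub_model_id="bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-2-checkpoints-tmp",
    hub_strategy="checkpoint",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```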
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 28
- `per_device_eval_batch_size`: 18
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 1e-06
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine_with_restarts
- `lr_scheduler_kwargs`: {'num_cycles': 3}
- `warmup_ratio`: 0.25
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: False
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-2-checkpoints-tmp
- `hub_strategy`: checkpoint
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | nli-pairs loss | qnli-contrastive loss | scitail-pairs-pos loss | sts-test_spearman_cosine |
|:------:|:----:|:-------------:|:--------------:|:---------------------:|:----------------------:|:------------------------:|
| 0 | 0 | - | - | - | - | 0.4188 |
| 0.0253 | 71 | 9.7048 | - | - | - | - |
| 0.0503 | 141 | - | 7.9860 | 8.4771 | 6.6165 | - |
| 0.0507 | 142 | 8.6743 | - | - | - | - |
| 0.0760 | 213 | 8.101 | - | - | - | - |
| 0.1006 | 282 | - | 6.8505 | 7.5583 | 4.4099 | - |
| 0.1014 | 284 | 7.5594 | - | - | - | - |
| 0.1267 | 355 | 6.3548 | - | - | - | - |
| 0.1510 | 423 | - | 5.2238 | 6.2964 | 2.3430 | - |
| 0.1520 | 426 | 5.869 | - | - | - | - |
| 0.1774 | 497 | 5.1134 | - | - | - | - |
| 0.2013 | 564 | - | 4.5785 | 5.6786 | 1.8733 | - |
| 0.2027 | 568 | 5.1262 | - | - | - | - |
| 0.2281 | 639 | 3.7625 | - | - | - | - |
| 0.2516 | 705 | - | 3.9531 | 5.1247 | 1.6374 | - |
| 0.2534 | 710 | 4.5256 | - | - | - | - |
| 0.2787 | 781 | 3.8572 | - | - | - | - |
| 0.3019 | 846 | - | 3.5362 | 4.5487 | 1.5215 | - |
| 0.3041 | 852 | 3.9294 | - | - | - | - |
| 0.3294 | 923 | 3.281 | - | - | - | - |
| 0.3522 | 987 | - | 3.1562 | 3.7942 | 1.4236 | - |
| 0.3547 | 994 | 3.2531 | - | - | - | - |
| 0.3801 | 1065 | 3.9305 | - | - | - | - |
| 0.4026 | 1128 | - | 2.7059 | 3.4370 | 1.2689 | - |
| 0.4054 | 1136 | 3.0324 | - | - | - | - |
| 0.4308 | 1207 | 3.3544 | - | - | - | - |
| 0.4529 | 1269 | - | 2.5396 | 3.0366 | 1.2415 | - |
| 0.4561 | 1278 | 3.2331 | - | - | - | - |
| 0.4814 | 1349 | 3.1913 | - | - | - | - |
| 0.5032 | 1410 | - | 2.2846 | 2.7076 | 1.1422 | - |
| 0.5068 | 1420 | 2.7389 | - | - | - | - |
| 0.5321 | 1491 | 2.9541 | - | - | - | - |
| 0.5535 | 1551 | - | 2.1732 | 2.3780 | 1.2127 | - |
| 0.5575 | 1562 | 3.0911 | - | - | - | - |
| 0.5828 | 1633 | 2.932 | - | - | - | - |
| 0.6039 | 1692 | - | 2.0257 | 1.9252 | 1.1056 | - |
| 0.6081 | 1704 | 3.082 | - | - | - | - |
| 0.6335 | 1775 | 3.0328 | - | - | - | - |
| 0.6542 | 1833 | - | 1.9588 | 2.0366 | 1.1187 | - |
| 0.6588 | 1846 | 2.9508 | - | - | - | - |
| 0.6842 | 1917 | 2.7445 | - | - | - | - |
| 0.7045 | 1974 | - | 1.8310 | 1.9980 | 1.0991 | - |
| 0.7095 | 1988 | 2.8922 | - | - | - | - |
| 0.7348 | 2059 | 2.7352 | - | - | - | - |
| 0.7548 | 2115 | - | 1.7650 | 1.5015 | 1.1103 | - |
| 0.7602 | 2130 | 3.2009 | - | - | - | - |
| 0.7855 | 2201 | 2.6261 | - | - | - | - |
| 0.8051 | 2256 | - | 1.6932 | 1.6964 | 1.0409 | - |
| 0.8108 | 2272 | 2.6623 | - | - | - | - |
| 0.8362 | 2343 | 2.8281 | - | - | - | - |
| 0.8555 | 2397 | - | 1.6844 | 1.7854 | 1.0300 | - |
| 0.8615 | 2414 | 2.3096 | - | - | - | - |
| 0.8869 | 2485 | 2.4088 | - | - | - | - |
| 0.9058 | 2538 | - | 1.6698 | 1.8310 | 1.0275 | - |
| 0.9122 | 2556 | 2.6051 | - | - | - | - |
| 0.9375 | 2627 | 2.972 | - | - | - | - |
| 0.9561 | 2679 | - | 1.6643 | 1.8173 | 1.0215 | - |
| 0.9629 | 2698 | 2.4207 | - | - | - | - |
| 0.9882 | 2769 | 2.2772 | - | - | - | - |
| 1.0064 | 2820 | - | 1.7130 | 1.7650 | 1.0496 | - |
| 1.0136 | 2840 | 2.6348 | - | - | - | - |
| 1.0389 | 2911 | 2.8271 | - | - | - | - |
| 1.0567 | 2961 | - | 1.6939 | 2.1074 | 0.9858 | - |
| 1.0642 | 2982 | 2.5215 | - | - | - | - |
| 1.0896 | 3053 | 2.7442 | - | - | - | - |
| 1.1071 | 3102 | - | 1.6633 | 1.5590 | 0.9903 | - |
| 1.1149 | 3124 | 2.6155 | - | - | - | - |
| 1.1403 | 3195 | 2.7053 | - | - | - | - |
| 1.1574 | 3243 | - | 1.6242 | 1.6429 | 0.9740 | - |
| 1.1656 | 3266 | 2.9191 | - | - | - | - |
| 1.1909 | 3337 | 2.1112 | - | - | - | - |
| 1.2077 | 3384 | - | 1.6535 | 1.6226 | 0.9516 | - |
| 1.2163 | 3408 | 2.3519 | - | - | - | - |
| 1.2416 | 3479 | 1.9416 | - | - | - | - |
| 1.2580 | 3525 | - | 1.6103 | 1.6530 | 0.9357 | - |
| 1.2670 | 3550 | 2.0859 | - | - | - | - |
| 1.2923 | 3621 | 2.0109 | - | - | - | - |
| 1.3084 | 3666 | - | 1.5773 | 1.4672 | 0.9155 | - |
| 1.3176 | 3692 | 2.366 | - | - | - | - |
| 1.3430 | 3763 | 1.5532 | - | - | - | - |
| 1.3587 | 3807 | - | 1.5514 | 1.4451 | 0.8979 | - |
| 1.3683 | 3834 | 1.9982 | - | - | - | - |
| 1.3936 | 3905 | 2.4375 | - | - | - | - |
| 1.4090 | 3948 | - | 1.5254 | 1.4050 | 0.8834 | - |
| 1.4190 | 3976 | 1.7548 | - | - | - | - |
| 1.4443 | 4047 | 2.2272 | - | - | - | - |
| 1.4593 | 4089 | - | 1.5186 | 1.3720 | 0.8835 | - |
| 1.4697 | 4118 | 2.2145 | - | - | - | - |
| 1.4950 | 4189 | 1.8696 | - | - | - | - |
| 1.5096 | 4230 | - | 1.5696 | 1.0682 | 0.9336 | - |
| 1.5203 | 4260 | 1.4926 | - | - | - | - |
| 1.5457 | 4331 | 2.1193 | - | - | - | - |
| 1.5600 | 4371 | - | 1.5469 | 0.8180 | 0.9663 | - |
| 1.5710 | 4402 | 2.0298 | - | - | - | - |
| 1.5964 | 4473 | 1.9959 | - | - | - | - |
| 1.6103 | 4512 | - | 1.4656 | 1.1725 | 0.8815 | - |
| 1.6217 | 4544 | 2.3452 | - | - | - | - |
| 1.6470 | 4615 | 1.9529 | - | - | - | - |
| 1.6606 | 4653 | - | 1.4709 | 1.1081 | 0.9079 | - |
| 1.6724 | 4686 | 1.7932 | - | - | - | - |
| 1.6977 | 4757 | 2.1881 | - | - | - | - |
| 1.7109 | 4794 | - | 1.4526 | 0.9851 | 0.9167 | - |
| 1.7231 | 4828 | 2.1128 | - | - | - | - |
| 1.7484 | 4899 | 2.4772 | - | - | - | - |
| 1.7612 | 4935 | - | 1.4204 | 0.8683 | 0.8896 | - |
| 1.7737 | 4970 | 2.4336 | - | - | - | - |
| 1.7991 | 5041 | 1.9101 | - | - | - | - |
| 1.8116 | 5076 | - | 1.3821 | 1.0420 | 0.8538 | - |
| 1.8244 | 5112 | 2.3882 | - | - | - | - |
| 1.8498 | 5183 | 2.2165 | - | - | - | - |
| 1.8619 | 5217 | - | 1.3747 | 1.0753 | 0.8580 | - |
| 1.8751 | 5254 | 1.6554 | - | - | - | - |
| 1.9004 | 5325 | 2.3828 | - | - | - | - |
| 1.9122 | 5358 | - | 1.3637 | 1.0699 | 0.8557 | - |
| 1.9258 | 5396 | 2.3499 | - | - | - | - |
| 1.9511 | 5467 | 2.3972 | - | - | - | - |
| 1.9625 | 5499 | - | 1.3583 | 1.0596 | 0.8536 | - |
| 1.9764 | 5538 | 1.931 | - | - | - | - |
| 2.0 | 5604 | - | 1.3586 | 1.0555 | 0.8543 | 0.7193 |
</details>
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2
- Accelerate: 0.30.1
- Datasets: 2.19.2
- Tokenizers: 0.19.1
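To approximate this environment, the listed versions can be pinned; a sketch (exact builds, e.g. CUDA-specific PyTorch wheels, may differ):

```
pip install sentence-transformers==3.0.1 transformers==4.41.2 torch==2.1.2 accelerate==0.30.1 datasets==2.19.2 tokenizers==0.19.1
```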
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### AdaptiveLayerLoss
```bibtex
@misc{li20242d,
title={2D Matryoshka Sentence Embeddings},
author={Xianming Li and Zongxi Li and Jing Li and Haoran Xie and Qing Li},
year={2024},
eprint={2402.14776},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### GISTEmbedLoss
```bibtex
@misc{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
year={2024},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | [
"MEDAL",
"SCIQ",
"SCITAIL"
] |
tuong-nguyen-prd/Phi-3.1-mini-128k-instruct | tuong-nguyen-prd | text-generation | [
"safetensors",
"phi3",
"nlp",
"code",
"text-generation",
"conversational",
"custom_code",
"en",
"license:mit",
"region:us"
] | 2024-08-15T15:09:41 | 2024-08-15T15:30:17 | 37 | 0 | ---
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Mini version, and comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which denote the context length (in tokens) each can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) <br>
📖 [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) <br>
🛠️ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3)
| | Short Context | Long Context |
| :- | :- | :- |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited for applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## Release Notes
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback.
The model used additional post-training data, leading to substantial gains in long-context understanding, instruction following, and structured output.
We also improved multi-turn conversation quality, added explicit support for the <|system|> tag, and significantly improved reasoning capability.
We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
The tables below highlight improvements in instruction following, structured output, reasoning, and long-context understanding in the new release, on our public and internal benchmark datasets.
| Benchmarks | Original | June 2024 Update |
| :- | :- | :- |
| Instruction Extra Hard | 5.7 | 5.9 |
| Instruction Hard | 5.0 | 5.2 |
| JSON Structure Output | 1.9 | 60.1 |
| XML Structure Output | 47.8 | 52.9 |
| GPQA | 25.9 | 29.7 |
| MMLU | 68.1 | 69.7 |
| **Average** | **25.7** | **37.3** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
| :-------------------| :------| :------| :------| :------| :------| :------| :---------|
| Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | **68.8** |
| June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | **84.6** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
| :-------------------| :--------| :-----| :------| :------| :------------| :---------|
| Original | 27 | 29 | 40 | 33 | 33 | **32.4** |
| June 2024 Update | 85 | 63 | 72 | 93 | 72 | **77** |
Notes: if users would like to check out the previous version, use the git commit id **bb5bf1e4001277a606e11debca0ef80323e5f824**. For model conversion, e.g. to GGUF and other formats, we invite the community to experiment with various approaches and share your valuable feedback. Let's innovate together!
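For instance, the pinned commit can be loaded through the `revision` argument of `from_pretrained()` (a brief sketch):

```python
from transformers import AutoModelForCausalLM

# Load the original (pre-update) weights by pinning the commit id above.
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    revision="bb5bf1e4001277a606e11debca0ef80323e5f824",
    trust_remote_code=True,
)
```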
## How to Use
Phi-3 Mini-128K-Instruct has been integrated into the development version (4.41.3) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```
Phi-3 Mini-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3)
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
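As a sketch of that extension workflow using the standard `transformers` API (the token names below are hypothetical examples, not part of the released tokenizer):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Hypothetical domain-specific tokens added for downstream fine-tuning.
num_added = tokenizer.add_tokens(["<|tool_call|>", "<|tool_result|>"])
if num_added > 0:
    # Keep the embedding table in sync with the enlarged tokenizer,
    # staying within the model's 32064-token vocabulary capacity.
    model.resize_token_embeddings(len(tokenizer))
```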
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
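In practice this markup does not need to be assembled by hand; the tokenizer's built-in chat template should render it (a brief sketch):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]

# Renders the <|system|>/<|user|>/<|end|> markup shown above and appends
# the <|assistant|> tag so the model knows it should respond next.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```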
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-128k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call _AutoModelForCausalLM.from_pretrained()_ with _attn_implementation="flash_attention_2"_
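For example, a sketch (flash attention requires a supported GPU and the `flash_attn` package; `eager` is the fallback noted in the Hardware section below):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",  # or "eager" on V100-class GPUs
)
```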
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and direct preference optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 10 days
* Training data: 4.9T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between May and June 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release dates: June, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.9 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
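Beyond the linked script, a minimal TRL-based SFT setup might look like the sketch below. The dataset path, text field, and hyperparameters are illustrative assumptions, and the exact `SFTTrainer` signature varies across TRL versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_id = "microsoft/Phi-3-mini-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical JSONL file whose rows contain a chat-formatted "text" field.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="phi3-sft",  # placeholder
        per_device_train_batch_size=1,
        num_train_epochs=1,
        bf16=True,  # assumes an Ampere-class or newer GPU
    ),
)
trainer.train()
```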
## Benchmarks
We report the results under completion format for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| Category | Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| :----------| :-----------| :---------------------| :----------| :------------| :--------------| :----------------| :-------------------|
| Popular aggregated benchmark | AGI Eval <br>5-shot| 39.5 | 42.1 | 35.1 | 45.2 | 42 | 48.4 |
| | MMLU <br>5-shot | 69.7 | 63.6 | 61.7 | 70.5 | 66.5 | 71.4 |
| | BigBench Hard <br>3-shot | 72.1 | 59.6 | 57.3 | 69.7 | 51.5 | 68.3 |
| Language Understanding | ANLI <br>7-shot | 52.3 | 48.7 | 47.1 | 55.2 | 57.3 | 58.1 |
| | HellaSwag <br>5-shot | 70.5 | 49.8 | 58.5 | 70.4 | 71.1 | 78.8 |
| Reasoning | ARC Challenge <br>10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| | BoolQ <br>0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| | MedQA <br>2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| | OpenBookQA <br>10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| | PIQA <br>5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| | GPQA <br>0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| | Social IQA <br>5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| | TruthfulQA (MC2) <br>10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| | WinoGrande <br>5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Factual Knowledge | TriviaQA <br>5-shot | 57.8 | 72.3 | 75.2 | 82.2 | 67.7 | 85.8 |
| Math | GSM8K CoT <br>8-shot | 85.3 | 59.8 | 46.4 | 64.7 | 77.4 | 78.1 |
| Code Generation | HumanEval <br>0-shot | 60.4 | 34.1 | 28.0 | 37.8 | 60.4 | 62.2 |
| | MBPP <br>3-shot | 70.0 | 51.5 | 50.8 | 60.2 | 67.7 | 77.8 |
| **Average** | | **66.4** | **56.0** | **56.4** | **64.4** | **65.5** | **70.3** |
**Long Context**: Phi-3 Mini-128K-Instruct supports a 128K context length; the model is therefore capable of several long-context tasks, including long document/meeting summarization and long document QA.
| Benchmark | Phi-3 Mini-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct |
| :---------------| :--------------------------|:------------|:--------------|:---------------------|
| GovReport | 25.3 | 4.9 | 20.3 | 10.3 |
| QMSum | 21.9 | 15.5 | 20.6 | 2.9 |
| Qasper | 41.6 | 23.5 | 26.6 | 8.1 |
| SQuALITY | 24.1 | 14.7 | 16.2 | 25 |
| SummScreenFD | 16.8 | 9.3 | 11.3 | 5.1 |
| **Average** | **25.9** | **13.6** | **19.0** | **10.3** |
We take a closer look at different categories across 100 public benchmark datasets at the table below:
| Category | Phi-3-Mini-128K-Instruct | Gemma-7B | Mistral-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo |
|:----------|:--------------------------|:----------|:------------|:--------------|:---------------------|:---------------|
| Popular aggregated benchmark | 60.6 | 59.4 | 56.5 | 66.2 | 59.9 | 67.0 |
| Reasoning | 69.4 | 60.3 | 62.8 | 68.1 | 69.6 | 71.7 |
| Language understanding | 57.5 | 57.6 | 52.5 | 66.1 | 63.2 | 67.7 |
| Code generation | 61.0 | 45.6 | 42.9 | 52.7 | 56.4 | 70.4 |
| Math | 51.6 | 35.8 | 25.4 | 40.3 | 41.1 | 52.8 |
| Factual knowledge | 35.8 | 46.7 | 49.8 | 58.6 | 43.1 | 63.4 |
| Multilingual | 56.4 | 66.5 | 57.4 | 66.7 | 66.6 | 71.0 |
| Robustness | 61.1 | 38.4 | 40.6 | 51.0 | 64.5 | 69.3 |
Overall, the model, with only 3.8B parameters, achieves a similar level of language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much world knowledge, which can be seen, for example, in its low performance on TriviaQA. That said, we believe such weakness can be resolved by augmenting Phi-3-Mini with a search engine.
## Cross Platform Support
[ONNX runtime](https://onnxruntime.ai/blogs/accelerating-phi-3) now supports Phi-3 mini models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DML, ONNX Runtime provides cross-platform support for Phi-3 mini across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
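As a rough sketch of running one of these variants with the `onnxruntime-genai` package (the local path is a placeholder, and API details shift between releases, so treat this as an assumption rather than a reference):

```python
import onnxruntime_genai as og

# Placeholder path to a downloaded ONNX variant, e.g. the int4 CPU build.
model = og.Model("./phi3-mini-128k-instruct-onnx/cpu-int4")
tokenizer = og.Tokenizer(model)

prompt = "<|user|>\nWhat is ONNX Runtime?<|end|>\n<|assistant|>\n"
params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

output_tokens = model.generate(params)
print(tokenizer.decode(output_tokens[0]))
```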
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3 Mini-128K-Instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call AutoModelForCausalLM.from_pretrained() with attn_implementation="eager"
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"SUMMARIZATION"
] | [
"MEDQA"
] |
soichisumi/multilingual-e5-large-Q8_0-GGUF | soichisumi | feature-extraction | [
"sentence-transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"feature-extraction",
"llama-cpp",
"gguf-my-repo",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large",
"base_model:quantized:intfloat/multilingual-e5-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-08-25T06:27:53 | 2024-08-25T06:28:01 | 37 | 0 | ---
base_model: intfloat/multilingual-e5-large
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
license: mit
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- feature-extraction
- sentence-transformers
- llama-cpp
- gguf-my-repo
model-index:
- name: multilingual-e5-large
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 79.05970149253731
- type: ap
value: 43.486574390835635
- type: f1
value: 73.32700092140148
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (de)
type: mteb/amazon_counterfactual
config: de
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 71.22055674518201
- type: ap
value: 81.55756710830498
- type: f1
value: 69.28271787752661
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en-ext)
type: mteb/amazon_counterfactual
config: en-ext
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 80.41979010494754
- type: ap
value: 29.34879922376344
- type: f1
value: 67.62475449011278
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (ja)
type: mteb/amazon_counterfactual
config: ja
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 77.8372591006424
- type: ap
value: 26.557560591210738
- type: f1
value: 64.96619417368707
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.489875
- type: ap
value: 90.98758636917603
- type: f1
value: 93.48554819717332
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 47.564
- type: f1
value: 46.75122173518047
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (de)
type: mteb/amazon_reviews_multi
config: de
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 45.400000000000006
- type: f1
value: 44.17195682400632
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (es)
type: mteb/amazon_reviews_multi
config: es
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 43.068
- type: f1
value: 42.38155696855596
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.89
- type: f1
value: 40.84407321682663
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (ja)
type: mteb/amazon_reviews_multi
config: ja
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 40.120000000000005
- type: f1
value: 39.522976223819114
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 38.832
- type: f1
value: 38.0392533394713
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 30.725
- type: map_at_10
value: 46.055
- type: map_at_100
value: 46.900999999999996
- type: map_at_1000
value: 46.911
- type: map_at_3
value: 41.548
- type: map_at_5
value: 44.297
- type: mrr_at_1
value: 31.152
- type: mrr_at_10
value: 46.231
- type: mrr_at_100
value: 47.07
- type: mrr_at_1000
value: 47.08
- type: mrr_at_3
value: 41.738
- type: mrr_at_5
value: 44.468999999999994
- type: ndcg_at_1
value: 30.725
- type: ndcg_at_10
value: 54.379999999999995
- type: ndcg_at_100
value: 58.138
- type: ndcg_at_1000
value: 58.389
- type: ndcg_at_3
value: 45.156
- type: ndcg_at_5
value: 50.123
- type: precision_at_1
value: 30.725
- type: precision_at_10
value: 8.087
- type: precision_at_100
value: 0.9769999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 18.54
- type: precision_at_5
value: 13.542000000000002
- type: recall_at_1
value: 30.725
- type: recall_at_10
value: 80.868
- type: recall_at_100
value: 97.653
- type: recall_at_1000
value: 99.57300000000001
- type: recall_at_3
value: 55.619
- type: recall_at_5
value: 67.71000000000001
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 44.30960650674069
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.427074197498996
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.28270056031872
- type: mrr
value: 74.38332673789738
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 84.05942144105269
- type: cos_sim_spearman
value: 82.51212105850809
- type: euclidean_pearson
value: 81.95639829909122
- type: euclidean_spearman
value: 82.3717564144213
- type: manhattan_pearson
value: 81.79273425468256
- type: manhattan_spearman
value: 82.20066817871039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (de-en)
type: mteb/bucc-bitext-mining
config: de-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.46764091858039
- type: f1
value: 99.37717466945023
- type: precision
value: 99.33194154488518
- type: recall
value: 99.46764091858039
- task:
type: BitextMining
dataset:
name: MTEB BUCC (fr-en)
type: mteb/bucc-bitext-mining
config: fr-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 98.29407880255337
- type: f1
value: 98.11248073959938
- type: precision
value: 98.02443319392472
- type: recall
value: 98.29407880255337
- task:
type: BitextMining
dataset:
name: MTEB BUCC (ru-en)
type: mteb/bucc-bitext-mining
config: ru-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 97.79009352268791
- type: f1
value: 97.5176076665512
- type: precision
value: 97.38136473848286
- type: recall
value: 97.79009352268791
- task:
type: BitextMining
dataset:
name: MTEB BUCC (zh-en)
type: mteb/bucc-bitext-mining
config: zh-en
split: test
revision: d51519689f32196a32af33b075a01d0e7c51e252
metrics:
- type: accuracy
value: 99.26276987888363
- type: f1
value: 99.20133403545726
- type: precision
value: 99.17500438827453
- type: recall
value: 99.26276987888363
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 84.72727272727273
- type: f1
value: 84.67672206031433
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 35.34220182511161
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 33.4987096128766
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 25.558249999999997
- type: map_at_10
value: 34.44425000000001
- type: map_at_100
value: 35.59833333333333
- type: map_at_1000
value: 35.706916666666665
- type: map_at_3
value: 31.691749999999995
- type: map_at_5
value: 33.252916666666664
- type: mrr_at_1
value: 30.252666666666666
- type: mrr_at_10
value: 38.60675
- type: mrr_at_100
value: 39.42666666666666
- type: mrr_at_1000
value: 39.48408333333334
- type: mrr_at_3
value: 36.17441666666665
- type: mrr_at_5
value: 37.56275
- type: ndcg_at_1
value: 30.252666666666666
- type: ndcg_at_10
value: 39.683
- type: ndcg_at_100
value: 44.68541666666667
- type: ndcg_at_1000
value: 46.94316666666668
- type: ndcg_at_3
value: 34.961749999999995
- type: ndcg_at_5
value: 37.215666666666664
- type: precision_at_1
value: 30.252666666666666
- type: precision_at_10
value: 6.904166666666667
- type: precision_at_100
value: 1.0989999999999995
- type: precision_at_1000
value: 0.14733333333333334
- type: precision_at_3
value: 16.037666666666667
- type: precision_at_5
value: 11.413583333333333
- type: recall_at_1
value: 25.558249999999997
- type: recall_at_10
value: 51.13341666666666
- type: recall_at_100
value: 73.08366666666667
- type: recall_at_1000
value: 88.79483333333334
- type: recall_at_3
value: 37.989083333333326
- type: recall_at_5
value: 43.787833333333325
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 10.338
- type: map_at_10
value: 18.360000000000003
- type: map_at_100
value: 19.942
- type: map_at_1000
value: 20.134
- type: map_at_3
value: 15.174000000000001
- type: map_at_5
value: 16.830000000000002
- type: mrr_at_1
value: 23.257
- type: mrr_at_10
value: 33.768
- type: mrr_at_100
value: 34.707
- type: mrr_at_1000
value: 34.766000000000005
- type: mrr_at_3
value: 30.977
- type: mrr_at_5
value: 32.528
- type: ndcg_at_1
value: 23.257
- type: ndcg_at_10
value: 25.733
- type: ndcg_at_100
value: 32.288
- type: ndcg_at_1000
value: 35.992000000000004
- type: ndcg_at_3
value: 20.866
- type: ndcg_at_5
value: 22.612
- type: precision_at_1
value: 23.257
- type: precision_at_10
value: 8.124
- type: precision_at_100
value: 1.518
- type: precision_at_1000
value: 0.219
- type: precision_at_3
value: 15.679000000000002
- type: precision_at_5
value: 12.117
- type: recall_at_1
value: 10.338
- type: recall_at_10
value: 31.154
- type: recall_at_100
value: 54.161
- type: recall_at_1000
value: 75.21900000000001
- type: recall_at_3
value: 19.427
- type: recall_at_5
value: 24.214
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 8.498
- type: map_at_10
value: 19.103
- type: map_at_100
value: 27.375
- type: map_at_1000
value: 28.981
- type: map_at_3
value: 13.764999999999999
- type: map_at_5
value: 15.950000000000001
- type: mrr_at_1
value: 65.5
- type: mrr_at_10
value: 74.53800000000001
- type: mrr_at_100
value: 74.71799999999999
- type: mrr_at_1000
value: 74.725
- type: mrr_at_3
value: 72.792
- type: mrr_at_5
value: 73.554
- type: ndcg_at_1
value: 53.37499999999999
- type: ndcg_at_10
value: 41.286
- type: ndcg_at_100
value: 45.972
- type: ndcg_at_1000
value: 53.123
- type: ndcg_at_3
value: 46.172999999999995
- type: ndcg_at_5
value: 43.033
- type: precision_at_1
value: 65.5
- type: precision_at_10
value: 32.725
- type: precision_at_100
value: 10.683
- type: precision_at_1000
value: 1.978
- type: precision_at_3
value: 50
- type: precision_at_5
value: 41.349999999999994
- type: recall_at_1
value: 8.498
- type: recall_at_10
value: 25.070999999999998
- type: recall_at_100
value: 52.383
- type: recall_at_1000
value: 74.91499999999999
- type: recall_at_3
value: 15.207999999999998
- type: recall_at_5
value: 18.563
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.5
- type: f1
value: 41.93833713984145
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 67.914
- type: map_at_10
value: 78.10000000000001
- type: map_at_100
value: 78.333
- type: map_at_1000
value: 78.346
- type: map_at_3
value: 76.626
- type: map_at_5
value: 77.627
- type: mrr_at_1
value: 72.74199999999999
- type: mrr_at_10
value: 82.414
- type: mrr_at_100
value: 82.511
- type: mrr_at_1000
value: 82.513
- type: mrr_at_3
value: 81.231
- type: mrr_at_5
value: 82.065
- type: ndcg_at_1
value: 72.74199999999999
- type: ndcg_at_10
value: 82.806
- type: ndcg_at_100
value: 83.677
- type: ndcg_at_1000
value: 83.917
- type: ndcg_at_3
value: 80.305
- type: ndcg_at_5
value: 81.843
- type: precision_at_1
value: 72.74199999999999
- type: precision_at_10
value: 10.24
- type: precision_at_100
value: 1.089
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 31.268
- type: precision_at_5
value: 19.706000000000003
- type: recall_at_1
value: 67.914
- type: recall_at_10
value: 92.889
- type: recall_at_100
value: 96.42699999999999
- type: recall_at_1000
value: 97.92
- type: recall_at_3
value: 86.21
- type: recall_at_5
value: 90.036
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 22.166
- type: map_at_10
value: 35.57
- type: map_at_100
value: 37.405
- type: map_at_1000
value: 37.564
- type: map_at_3
value: 30.379
- type: map_at_5
value: 33.324
- type: mrr_at_1
value: 43.519000000000005
- type: mrr_at_10
value: 51.556000000000004
- type: mrr_at_100
value: 52.344
- type: mrr_at_1000
value: 52.373999999999995
- type: mrr_at_3
value: 48.868
- type: mrr_at_5
value: 50.319
- type: ndcg_at_1
value: 43.519000000000005
- type: ndcg_at_10
value: 43.803
- type: ndcg_at_100
value: 50.468999999999994
- type: ndcg_at_1000
value: 53.111
- type: ndcg_at_3
value: 38.893
- type: ndcg_at_5
value: 40.653
- type: precision_at_1
value: 43.519000000000005
- type: precision_at_10
value: 12.253
- type: precision_at_100
value: 1.931
- type: precision_at_1000
value: 0.242
- type: precision_at_3
value: 25.617
- type: precision_at_5
value: 19.383
- type: recall_at_1
value: 22.166
- type: recall_at_10
value: 51.6
- type: recall_at_100
value: 76.574
- type: recall_at_1000
value: 92.192
- type: recall_at_3
value: 34.477999999999994
- type: recall_at_5
value: 41.835
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 39.041
- type: map_at_10
value: 62.961999999999996
- type: map_at_100
value: 63.79899999999999
- type: map_at_1000
value: 63.854
- type: map_at_3
value: 59.399
- type: map_at_5
value: 61.669
- type: mrr_at_1
value: 78.082
- type: mrr_at_10
value: 84.321
- type: mrr_at_100
value: 84.49600000000001
- type: mrr_at_1000
value: 84.502
- type: mrr_at_3
value: 83.421
- type: mrr_at_5
value: 83.977
- type: ndcg_at_1
value: 78.082
- type: ndcg_at_10
value: 71.229
- type: ndcg_at_100
value: 74.10900000000001
- type: ndcg_at_1000
value: 75.169
- type: ndcg_at_3
value: 66.28699999999999
- type: ndcg_at_5
value: 69.084
- type: precision_at_1
value: 78.082
- type: precision_at_10
value: 14.993
- type: precision_at_100
value: 1.7239999999999998
- type: precision_at_1000
value: 0.186
- type: precision_at_3
value: 42.737
- type: precision_at_5
value: 27.843
- type: recall_at_1
value: 39.041
- type: recall_at_10
value: 74.96300000000001
- type: recall_at_100
value: 86.199
- type: recall_at_1000
value: 93.228
- type: recall_at_3
value: 64.105
- type: recall_at_5
value: 69.608
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.23160000000001
- type: ap
value: 85.5674856808308
- type: f1
value: 90.18033354786317
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 24.091
- type: map_at_10
value: 36.753
- type: map_at_100
value: 37.913000000000004
- type: map_at_1000
value: 37.958999999999996
- type: map_at_3
value: 32.818999999999996
- type: map_at_5
value: 35.171
- type: mrr_at_1
value: 24.742
- type: mrr_at_10
value: 37.285000000000004
- type: mrr_at_100
value: 38.391999999999996
- type: mrr_at_1000
value: 38.431
- type: mrr_at_3
value: 33.440999999999995
- type: mrr_at_5
value: 35.75
- type: ndcg_at_1
value: 24.742
- type: ndcg_at_10
value: 43.698
- type: ndcg_at_100
value: 49.145
- type: ndcg_at_1000
value: 50.23800000000001
- type: ndcg_at_3
value: 35.769
- type: ndcg_at_5
value: 39.961999999999996
- type: precision_at_1
value: 24.742
- type: precision_at_10
value: 6.7989999999999995
- type: precision_at_100
value: 0.95
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 15.096000000000002
- type: precision_at_5
value: 11.183
- type: recall_at_1
value: 24.091
- type: recall_at_10
value: 65.068
- type: recall_at_100
value: 89.899
- type: recall_at_1000
value: 98.16
- type: recall_at_3
value: 43.68
- type: recall_at_5
value: 53.754999999999995
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 93.66621067031465
- type: f1
value: 93.49622853272142
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (de)
type: mteb/mtop_domain
config: de
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 91.94702733164272
- type: f1
value: 91.17043441745282
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (es)
type: mteb/mtop_domain
config: es
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.20146764509674
- type: f1
value: 91.98359080555608
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.99780770435328
- type: f1
value: 89.19746342724068
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (hi)
type: mteb/mtop_domain
config: hi
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 89.78486912871998
- type: f1
value: 89.24578823628642
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (th)
type: mteb/mtop_domain
config: th
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 88.74502712477394
- type: f1
value: 89.00297573881542
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 77.9046967624259
- type: f1
value: 59.36787125785957
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (de)
type: mteb/mtop_intent
config: de
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.5280360664976
- type: f1
value: 57.17723440888718
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (es)
type: mteb/mtop_intent
config: es
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 75.44029352901934
- type: f1
value: 54.052855531072964
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 70.5606013153774
- type: f1
value: 52.62215934386531
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (hi)
type: mteb/mtop_intent
config: hi
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 73.11581211903908
- type: f1
value: 52.341291845645465
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (th)
type: mteb/mtop_intent
config: th
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 74.28933092224233
- type: f1
value: 57.07918745504911
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (af)
type: mteb/amazon_massive_intent
config: af
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.38063214525892
- type: f1
value: 59.46463723443009
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (am)
type: mteb/amazon_massive_intent
config: am
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 56.06926698049766
- type: f1
value: 52.49084283283562
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ar)
type: mteb/amazon_massive_intent
config: ar
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.74983187626093
- type: f1
value: 56.960640620165904
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (az)
type: mteb/amazon_massive_intent
config: az
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.86550100874243
- type: f1
value: 62.47370548140688
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (bn)
type: mteb/amazon_massive_intent
config: bn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.971082716879636
- type: f1
value: 61.03812421957381
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (cy)
type: mteb/amazon_massive_intent
config: cy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 54.98318762609282
- type: f1
value: 51.51207916008392
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (da)
type: mteb/amazon_massive_intent
config: da
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.45527908540686
- type: f1
value: 66.16631905400318
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (de)
type: mteb/amazon_massive_intent
config: de
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.32750504371216
- type: f1
value: 66.16755288646591
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (el)
type: mteb/amazon_massive_intent
config: el
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.09213180901143
- type: f1
value: 66.95654394661507
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 73.75588433086752
- type: f1
value: 71.79973779656923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (es)
type: mteb/amazon_massive_intent
config: es
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.49428379287154
- type: f1
value: 68.37494379215734
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fa)
type: mteb/amazon_massive_intent
config: fa
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.90921318090115
- type: f1
value: 66.79517376481645
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fi)
type: mteb/amazon_massive_intent
config: fi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.12104909213181
- type: f1
value: 67.29448842879584
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.34095494283793
- type: f1
value: 67.01134288992947
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (he)
type: mteb/amazon_massive_intent
config: he
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.61264290517822
- type: f1
value: 64.68730512660757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hi)
type: mteb/amazon_massive_intent
config: hi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.79757901815738
- type: f1
value: 65.24938539425598
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hu)
type: mteb/amazon_massive_intent
config: hu
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.68728984532616
- type: f1
value: 67.0487169762553
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (hy)
type: mteb/amazon_massive_intent
config: hy
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.07464694014795
- type: f1
value: 59.183532276789286
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (id)
type: mteb/amazon_massive_intent
config: id
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.04707464694015
- type: f1
value: 67.66829629003848
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (is)
type: mteb/amazon_massive_intent
config: is
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.42434431741762
- type: f1
value: 59.01617226544757
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (it)
type: mteb/amazon_massive_intent
config: it
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.53127101546738
- type: f1
value: 68.10033760906255
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ja)
type: mteb/amazon_massive_intent
config: ja
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 72.50504371217215
- type: f1
value: 69.74931103158923
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (jv)
type: mteb/amazon_massive_intent
config: jv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 57.91190316072628
- type: f1
value: 54.05551136648796
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ka)
type: mteb/amazon_massive_intent
config: ka
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 51.78211163416275
- type: f1
value: 49.874888544058535
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (km)
type: mteb/amazon_massive_intent
config: km
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 47.017484868863484
- type: f1
value: 44.53364263352014
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (kn)
type: mteb/amazon_massive_intent
config: kn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.16207128446537
- type: f1
value: 59.01185692320829
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ko)
type: mteb/amazon_massive_intent
config: ko
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.42501681237391
- type: f1
value: 67.13169450166086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (lv)
type: mteb/amazon_massive_intent
config: lv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0780094149294
- type: f1
value: 64.41720167850707
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ml)
type: mteb/amazon_massive_intent
config: ml
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.57162071284466
- type: f1
value: 62.414138683804424
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (mn)
type: mteb/amazon_massive_intent
config: mn
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.71149966375252
- type: f1
value: 58.594805125087234
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ms)
type: mteb/amazon_massive_intent
config: ms
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.03900470746471
- type: f1
value: 63.87937257883887
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (my)
type: mteb/amazon_massive_intent
config: my
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 60.8776059179556
- type: f1
value: 57.48587618059131
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nb)
type: mteb/amazon_massive_intent
config: nb
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87895090786819
- type: f1
value: 66.8141299430347
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (nl)
type: mteb/amazon_massive_intent
config: nl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.45057162071285
- type: f1
value: 67.46444039673516
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.546738399462
- type: f1
value: 68.63640876702655
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pt)
type: mteb/amazon_massive_intent
config: pt
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 70.72965702757229
- type: f1
value: 68.54119560379115
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ro)
type: mteb/amazon_massive_intent
config: ro
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.35574983187625
- type: f1
value: 65.88844917691927
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ru)
type: mteb/amazon_massive_intent
config: ru
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.70477471418964
- type: f1
value: 69.19665697061978
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sl)
type: mteb/amazon_massive_intent
config: sl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 67.0880968392737
- type: f1
value: 64.76962317666086
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sq)
type: mteb/amazon_massive_intent
config: sq
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.18493611297916
- type: f1
value: 62.49984559035371
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sv)
type: mteb/amazon_massive_intent
config: sv
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.75857431069265
- type: f1
value: 69.20053687623418
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (sw)
type: mteb/amazon_massive_intent
config: sw
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 58.500336247478145
- type: f1
value: 55.2972398687929
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ta)
type: mteb/amazon_massive_intent
config: ta
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 62.68997982515132
- type: f1
value: 59.36848202755348
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (te)
type: mteb/amazon_massive_intent
config: te
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 63.01950235373235
- type: f1
value: 60.09351954625423
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (th)
type: mteb/amazon_massive_intent
config: th
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 68.29186281102892
- type: f1
value: 67.57860496703447
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tl)
type: mteb/amazon_massive_intent
config: tl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.77471418964357
- type: f1
value: 61.913983147713836
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (tr)
type: mteb/amazon_massive_intent
config: tr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.87222595830532
- type: f1
value: 66.03679033708141
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (ur)
type: mteb/amazon_massive_intent
config: ur
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 64.04505716207127
- type: f1
value: 61.28569169817908
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (vi)
type: mteb/amazon_massive_intent
config: vi
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 69.38466711499663
- type: f1
value: 67.20532357036844
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 71.12306657700067
- type: f1
value: 68.91251226588182
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-TW)
type: mteb/amazon_massive_intent
config: zh-TW
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 66.20040349697378
- type: f1
value: 66.02657347714175
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (af)
type: mteb/amazon_massive_scenario
config: af
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.73907195696032
- type: f1
value: 66.98484521791418
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (am)
type: mteb/amazon_massive_scenario
config: am
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 60.58843308675185
- type: f1
value: 58.95591723092005
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ar)
type: mteb/amazon_massive_scenario
config: ar
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.22730329522528
- type: f1
value: 66.0894499712115
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (az)
type: mteb/amazon_massive_scenario
config: az
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.48285137861465
- type: f1
value: 65.21963176785157
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (bn)
type: mteb/amazon_massive_scenario
config: bn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.74714189643578
- type: f1
value: 66.8212192745412
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (cy)
type: mteb/amazon_massive_scenario
config: cy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 59.09213180901143
- type: f1
value: 56.70735546356339
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (da)
type: mteb/amazon_massive_scenario
config: da
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.05716207128448
- type: f1
value: 74.8413712365364
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (de)
type: mteb/amazon_massive_scenario
config: de
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.69737726967047
- type: f1
value: 74.7664341963
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (el)
type: mteb/amazon_massive_scenario
config: el
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.90383322125084
- type: f1
value: 73.59201554448323
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.51176866173503
- type: f1
value: 77.46104434577758
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (es)
type: mteb/amazon_massive_scenario
config: es
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.31069266980496
- type: f1
value: 74.61048660675635
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fa)
type: mteb/amazon_massive_scenario
config: fa
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.95225285810356
- type: f1
value: 72.33160006574627
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fi)
type: mteb/amazon_massive_scenario
config: fi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.12373907195696
- type: f1
value: 73.20921012557481
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.86684599865501
- type: f1
value: 73.82348774610831
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (he)
type: mteb/amazon_massive_scenario
config: he
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.40215198386012
- type: f1
value: 71.11945183971858
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hi)
type: mteb/amazon_massive_scenario
config: hi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 72.12844653665098
- type: f1
value: 71.34450495911766
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hu)
type: mteb/amazon_massive_scenario
config: hu
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.52252858103566
- type: f1
value: 73.98878711342999
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (hy)
type: mteb/amazon_massive_scenario
config: hy
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.93611297915265
- type: f1
value: 63.723200467653385
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (id)
type: mteb/amazon_massive_scenario
config: id
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.11903160726295
- type: f1
value: 73.82138439467096
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (is)
type: mteb/amazon_massive_scenario
config: is
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.15198386012105
- type: f1
value: 66.02172193802167
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (it)
type: mteb/amazon_massive_scenario
config: it
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.32414256893072
- type: f1
value: 74.30943421170574
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ja)
type: mteb/amazon_massive_scenario
config: ja
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 77.46805648957633
- type: f1
value: 77.62808409298209
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (jv)
type: mteb/amazon_massive_scenario
config: jv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.318762609280434
- type: f1
value: 62.094284066075076
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ka)
type: mteb/amazon_massive_scenario
config: ka
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 58.34902488231338
- type: f1
value: 57.12893860987984
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (km)
type: mteb/amazon_massive_scenario
config: km
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 50.88433086751849
- type: f1
value: 48.2272350802058
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (kn)
type: mteb/amazon_massive_scenario
config: kn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.4425016812374
- type: f1
value: 64.61463095996173
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ko)
type: mteb/amazon_massive_scenario
config: ko
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.04707464694015
- type: f1
value: 75.05099199098998
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (lv)
type: mteb/amazon_massive_scenario
config: lv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.50437121721586
- type: f1
value: 69.83397721096314
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ml)
type: mteb/amazon_massive_scenario
config: ml
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.94283792871553
- type: f1
value: 68.8704663703913
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (mn)
type: mteb/amazon_massive_scenario
config: mn
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 64.79488903833222
- type: f1
value: 63.615424063345436
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ms)
type: mteb/amazon_massive_scenario
config: ms
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 69.88231338264963
- type: f1
value: 68.57892302593237
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (my)
type: mteb/amazon_massive_scenario
config: my
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.248150638870214
- type: f1
value: 61.06680605338809
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nb)
type: mteb/amazon_massive_scenario
config: nb
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.84196368527236
- type: f1
value: 74.52566464968763
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (nl)
type: mteb/amazon_massive_scenario
config: nl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.8285137861466
- type: f1
value: 74.8853197608802
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 74.13248150638869
- type: f1
value: 74.3982040999179
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pt)
type: mteb/amazon_massive_scenario
config: pt
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.49024882313383
- type: f1
value: 73.82153848368573
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ro)
type: mteb/amazon_massive_scenario
config: ro
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.72158708809684
- type: f1
value: 71.85049433180541
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ru)
type: mteb/amazon_massive_scenario
config: ru
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 75.137861466039
- type: f1
value: 75.37628348188467
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sl)
type: mteb/amazon_massive_scenario
config: sl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.86953597848016
- type: f1
value: 71.87537624521661
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sq)
type: mteb/amazon_massive_scenario
config: sq
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 70.27572293207801
- type: f1
value: 68.80017302344231
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sv)
type: mteb/amazon_massive_scenario
config: sv
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.09952925353059
- type: f1
value: 76.07992707688408
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (sw)
type: mteb/amazon_massive_scenario
config: sw
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 63.140551445864155
- type: f1
value: 61.73855010331415
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ta)
type: mteb/amazon_massive_scenario
config: ta
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.27774041694687
- type: f1
value: 64.83664868894539
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (te)
type: mteb/amazon_massive_scenario
config: te
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 66.69468728984533
- type: f1
value: 64.76239666920868
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (th)
type: mteb/amazon_massive_scenario
config: th
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.44653665097512
- type: f1
value: 73.14646052013873
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tl)
type: mteb/amazon_massive_scenario
config: tl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.71351714862139
- type: f1
value: 66.67212180163382
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (tr)
type: mteb/amazon_massive_scenario
config: tr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.9946200403497
- type: f1
value: 73.87348793725525
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (ur)
type: mteb/amazon_massive_scenario
config: ur
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 68.15400134498992
- type: f1
value: 67.09433241421094
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (vi)
type: mteb/amazon_massive_scenario
config: vi
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 73.11365164761264
- type: f1
value: 73.59502539433753
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 76.82582380632145
- type: f1
value: 76.89992945316313
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-TW)
type: mteb/amazon_massive_scenario
config: zh-TW
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.81237390719569
- type: f1
value: 72.36499770986265
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.480506569594695
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 29.71252128004552
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.421396787056548
- type: mrr
value: 32.48155274872267
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 5.595
- type: map_at_10
value: 12.642000000000001
- type: map_at_100
value: 15.726
- type: map_at_1000
value: 17.061999999999998
- type: map_at_3
value: 9.125
- type: map_at_5
value: 10.866000000000001
- type: mrr_at_1
value: 43.344
- type: mrr_at_10
value: 52.227999999999994
- type: mrr_at_100
value: 52.898999999999994
- type: mrr_at_1000
value: 52.944
- type: mrr_at_3
value: 49.845
- type: mrr_at_5
value: 51.115
- type: ndcg_at_1
value: 41.949999999999996
- type: ndcg_at_10
value: 33.995
- type: ndcg_at_100
value: 30.869999999999997
- type: ndcg_at_1000
value: 39.487
- type: ndcg_at_3
value: 38.903999999999996
- type: ndcg_at_5
value: 37.236999999999995
- type: precision_at_1
value: 43.344
- type: precision_at_10
value: 25.480000000000004
- type: precision_at_100
value: 7.672
- type: precision_at_1000
value: 2.028
- type: precision_at_3
value: 36.636
- type: precision_at_5
value: 32.632
- type: recall_at_1
value: 5.595
- type: recall_at_10
value: 16.466
- type: recall_at_100
value: 31.226
- type: recall_at_1000
value: 62.778999999999996
- type: recall_at_3
value: 9.931
- type: recall_at_5
value: 12.884
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.414
- type: map_at_10
value: 56.754000000000005
- type: map_at_100
value: 57.457
- type: map_at_1000
value: 57.477999999999994
- type: map_at_3
value: 52.873999999999995
- type: map_at_5
value: 55.175
- type: mrr_at_1
value: 45.278
- type: mrr_at_10
value: 59.192
- type: mrr_at_100
value: 59.650000000000006
- type: mrr_at_1000
value: 59.665
- type: mrr_at_3
value: 56.141
- type: mrr_at_5
value: 57.998000000000005
- type: ndcg_at_1
value: 45.278
- type: ndcg_at_10
value: 64.056
- type: ndcg_at_100
value: 66.89
- type: ndcg_at_1000
value: 67.364
- type: ndcg_at_3
value: 56.97
- type: ndcg_at_5
value: 60.719
- type: precision_at_1
value: 45.278
- type: precision_at_10
value: 9.994
- type: precision_at_100
value: 1.165
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 25.512
- type: precision_at_5
value: 17.509
- type: recall_at_1
value: 40.414
- type: recall_at_10
value: 83.596
- type: recall_at_100
value: 95.72
- type: recall_at_1000
value: 99.24
- type: recall_at_3
value: 65.472
- type: recall_at_5
value: 74.039
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 70.352
- type: map_at_10
value: 84.369
- type: map_at_100
value: 85.02499999999999
- type: map_at_1000
value: 85.04
- type: map_at_3
value: 81.42399999999999
- type: map_at_5
value: 83.279
- type: mrr_at_1
value: 81.05
- type: mrr_at_10
value: 87.401
- type: mrr_at_100
value: 87.504
- type: mrr_at_1000
value: 87.505
- type: mrr_at_3
value: 86.443
- type: mrr_at_5
value: 87.10799999999999
- type: ndcg_at_1
value: 81.04
- type: ndcg_at_10
value: 88.181
- type: ndcg_at_100
value: 89.411
- type: ndcg_at_1000
value: 89.507
- type: ndcg_at_3
value: 85.28099999999999
- type: ndcg_at_5
value: 86.888
- type: precision_at_1
value: 81.04
- type: precision_at_10
value: 13.406
- type: precision_at_100
value: 1.5350000000000001
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.31
- type: precision_at_5
value: 24.54
- type: recall_at_1
value: 70.352
- type: recall_at_10
value: 95.358
- type: recall_at_100
value: 99.541
- type: recall_at_1000
value: 99.984
- type: recall_at_3
value: 87.111
- type: recall_at_5
value: 91.643
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.54068723291946
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 63.216287629895994
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.023000000000001
- type: map_at_10
value: 10.071
- type: map_at_100
value: 11.892
- type: map_at_1000
value: 12.196
- type: map_at_3
value: 7.234
- type: map_at_5
value: 8.613999999999999
- type: mrr_at_1
value: 19.900000000000002
- type: mrr_at_10
value: 30.516
- type: mrr_at_100
value: 31.656000000000002
- type: mrr_at_1000
value: 31.723000000000003
- type: mrr_at_3
value: 27.400000000000002
- type: mrr_at_5
value: 29.270000000000003
- type: ndcg_at_1
value: 19.900000000000002
- type: ndcg_at_10
value: 17.474
- type: ndcg_at_100
value: 25.020999999999997
- type: ndcg_at_1000
value: 30.728
- type: ndcg_at_3
value: 16.588
- type: ndcg_at_5
value: 14.498
- type: precision_at_1
value: 19.900000000000002
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 2.011
- type: precision_at_1000
value: 0.33899999999999997
- type: precision_at_3
value: 15.667
- type: precision_at_5
value: 12.839999999999998
- type: recall_at_1
value: 4.023000000000001
- type: recall_at_10
value: 18.497
- type: recall_at_100
value: 40.8
- type: recall_at_1000
value: 68.812
- type: recall_at_3
value: 9.508
- type: recall_at_5
value: 12.983
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 83.967008785134
- type: cos_sim_spearman
value: 80.23142141101837
- type: euclidean_pearson
value: 81.20166064704539
- type: euclidean_spearman
value: 80.18961335654585
- type: manhattan_pearson
value: 81.13925443187625
- type: manhattan_spearman
value: 80.07948723044424
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.94262461316023
- type: cos_sim_spearman
value: 80.01596278563865
- type: euclidean_pearson
value: 83.80799622922581
- type: euclidean_spearman
value: 79.94984954947103
- type: manhattan_pearson
value: 83.68473841756281
- type: manhattan_spearman
value: 79.84990707951822
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 80.57346443146068
- type: cos_sim_spearman
value: 81.54689837570866
- type: euclidean_pearson
value: 81.10909881516007
- type: euclidean_spearman
value: 81.56746243261762
- type: manhattan_pearson
value: 80.87076036186582
- type: manhattan_spearman
value: 81.33074987964402
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 79.54733787179849
- type: cos_sim_spearman
value: 77.72202105610411
- type: euclidean_pearson
value: 78.9043595478849
- type: euclidean_spearman
value: 77.93422804309435
- type: manhattan_pearson
value: 78.58115121621368
- type: manhattan_spearman
value: 77.62508135122033
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.59880017237558
- type: cos_sim_spearman
value: 89.31088630824758
- type: euclidean_pearson
value: 88.47069261564656
- type: euclidean_spearman
value: 89.33581971465233
- type: manhattan_pearson
value: 88.40774264100956
- type: manhattan_spearman
value: 89.28657485627835
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 84.08055117917084
- type: cos_sim_spearman
value: 85.78491813080304
- type: euclidean_pearson
value: 84.99329155500392
- type: euclidean_spearman
value: 85.76728064677287
- type: manhattan_pearson
value: 84.87947428989587
- type: manhattan_spearman
value: 85.62429454917464
- task:
type: STS
dataset:
name: MTEB STS17 (ko-ko)
type: mteb/sts17-crosslingual-sts
config: ko-ko
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 82.14190939287384
- type: cos_sim_spearman
value: 82.27331573306041
- type: euclidean_pearson
value: 81.891896953716
- type: euclidean_spearman
value: 82.37695542955998
- type: manhattan_pearson
value: 81.73123869460504
- type: manhattan_spearman
value: 82.19989168441421
- task:
type: STS
dataset:
name: MTEB STS17 (ar-ar)
type: mteb/sts17-crosslingual-sts
config: ar-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 76.84695301843362
- type: cos_sim_spearman
value: 77.87790986014461
- type: euclidean_pearson
value: 76.91981583106315
- type: euclidean_spearman
value: 77.88154772749589
- type: manhattan_pearson
value: 76.94953277451093
- type: manhattan_spearman
value: 77.80499230728604
- task:
type: STS
dataset:
name: MTEB STS17 (en-ar)
type: mteb/sts17-crosslingual-sts
config: en-ar
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 75.44657840482016
- type: cos_sim_spearman
value: 75.05531095119674
- type: euclidean_pearson
value: 75.88161755829299
- type: euclidean_spearman
value: 74.73176238219332
- type: manhattan_pearson
value: 75.63984765635362
- type: manhattan_spearman
value: 74.86476440770737
- task:
type: STS
dataset:
name: MTEB STS17 (en-de)
type: mteb/sts17-crosslingual-sts
config: en-de
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.64700140524133
- type: cos_sim_spearman
value: 86.16014210425672
- type: euclidean_pearson
value: 86.49086860843221
- type: euclidean_spearman
value: 86.09729326815614
- type: manhattan_pearson
value: 86.43406265125513
- type: manhattan_spearman
value: 86.17740150939994
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.91170098764921
- type: cos_sim_spearman
value: 88.12437004058931
- type: euclidean_pearson
value: 88.81828254494437
- type: euclidean_spearman
value: 88.14831794572122
- type: manhattan_pearson
value: 88.93442183448961
- type: manhattan_spearman
value: 88.15254630778304
- task:
type: STS
dataset:
name: MTEB STS17 (en-tr)
type: mteb/sts17-crosslingual-sts
config: en-tr
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 72.91390577997292
- type: cos_sim_spearman
value: 71.22979457536074
- type: euclidean_pearson
value: 74.40314008106749
- type: euclidean_spearman
value: 72.54972136083246
- type: manhattan_pearson
value: 73.85687539530218
- type: manhattan_spearman
value: 72.09500771742637
- task:
type: STS
dataset:
name: MTEB STS17 (es-en)
type: mteb/sts17-crosslingual-sts
config: es-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.9301067983089
- type: cos_sim_spearman
value: 80.74989828346473
- type: euclidean_pearson
value: 81.36781301814257
- type: euclidean_spearman
value: 80.9448819964426
- type: manhattan_pearson
value: 81.0351322685609
- type: manhattan_spearman
value: 80.70192121844177
- task:
type: STS
dataset:
name: MTEB STS17 (es-es)
type: mteb/sts17-crosslingual-sts
config: es-es
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 87.13820465980005
- type: cos_sim_spearman
value: 86.73532498758757
- type: euclidean_pearson
value: 87.21329451846637
- type: euclidean_spearman
value: 86.57863198601002
- type: manhattan_pearson
value: 87.06973713818554
- type: manhattan_spearman
value: 86.47534918791499
- task:
type: STS
dataset:
name: MTEB STS17 (fr-en)
type: mteb/sts17-crosslingual-sts
config: fr-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 85.48720108904415
- type: cos_sim_spearman
value: 85.62221757068387
- type: euclidean_pearson
value: 86.1010129512749
- type: euclidean_spearman
value: 85.86580966509942
- type: manhattan_pearson
value: 86.26800938808971
- type: manhattan_spearman
value: 85.88902721678429
- task:
type: STS
dataset:
name: MTEB STS17 (it-en)
type: mteb/sts17-crosslingual-sts
config: it-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 83.98021347333516
- type: cos_sim_spearman
value: 84.53806553803501
- type: euclidean_pearson
value: 84.61483347248364
- type: euclidean_spearman
value: 85.14191408011702
- type: manhattan_pearson
value: 84.75297588825967
- type: manhattan_spearman
value: 85.33176753669242
- task:
type: STS
dataset:
name: MTEB STS17 (nl-en)
type: mteb/sts17-crosslingual-sts
config: nl-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.51856644893233
- type: cos_sim_spearman
value: 85.27510748506413
- type: euclidean_pearson
value: 85.09886861540977
- type: euclidean_spearman
value: 85.62579245860887
- type: manhattan_pearson
value: 84.93017860464607
- type: manhattan_spearman
value: 85.5063988898453
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.581573200584195
- type: cos_sim_spearman
value: 63.05503590247928
- type: euclidean_pearson
value: 63.652564812602094
- type: euclidean_spearman
value: 62.64811520876156
- type: manhattan_pearson
value: 63.506842893061076
- type: manhattan_spearman
value: 62.51289573046917
- task:
type: STS
dataset:
name: MTEB STS22 (de)
type: mteb/sts22-crosslingual-sts
config: de
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 48.2248801729127
- type: cos_sim_spearman
value: 56.5936604678561
- type: euclidean_pearson
value: 43.98149464089
- type: euclidean_spearman
value: 56.108561882423615
- type: manhattan_pearson
value: 43.86880305903564
- type: manhattan_spearman
value: 56.04671150510166
- task:
type: STS
dataset:
name: MTEB STS22 (es)
type: mteb/sts22-crosslingual-sts
config: es
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.17564527009831
- type: cos_sim_spearman
value: 64.57978560979488
- type: euclidean_pearson
value: 58.8818330154583
- type: euclidean_spearman
value: 64.99214839071281
- type: manhattan_pearson
value: 58.72671436121381
- type: manhattan_spearman
value: 65.10713416616109
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 26.772131864023297
- type: cos_sim_spearman
value: 34.68200792408681
- type: euclidean_pearson
value: 16.68082419005441
- type: euclidean_spearman
value: 34.83099932652166
- type: manhattan_pearson
value: 16.52605949659529
- type: manhattan_spearman
value: 34.82075801399475
- task:
type: STS
dataset:
name: MTEB STS22 (tr)
type: mteb/sts22-crosslingual-sts
config: tr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 54.42415189043831
- type: cos_sim_spearman
value: 63.54594264576758
- type: euclidean_pearson
value: 57.36577498297745
- type: euclidean_spearman
value: 63.111466379158074
- type: manhattan_pearson
value: 57.584543715873885
- type: manhattan_spearman
value: 63.22361054139183
- task:
type: STS
dataset:
name: MTEB STS22 (ar)
type: mteb/sts22-crosslingual-sts
config: ar
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 47.55216762405518
- type: cos_sim_spearman
value: 56.98670142896412
- type: euclidean_pearson
value: 50.15318757562699
- type: euclidean_spearman
value: 56.524941926541906
- type: manhattan_pearson
value: 49.955618528674904
- type: manhattan_spearman
value: 56.37102209240117
- task:
type: STS
dataset:
name: MTEB STS22 (ru)
type: mteb/sts22-crosslingual-sts
config: ru
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 49.20540980338571
- type: cos_sim_spearman
value: 59.9009453504406
- type: euclidean_pearson
value: 49.557749853620535
- type: euclidean_spearman
value: 59.76631621172456
- type: manhattan_pearson
value: 49.62340591181147
- type: manhattan_spearman
value: 59.94224880322436
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 51.508169956576985
- type: cos_sim_spearman
value: 66.82461565306046
- type: euclidean_pearson
value: 56.2274426480083
- type: euclidean_spearman
value: 66.6775323848333
- type: manhattan_pearson
value: 55.98277796300661
- type: manhattan_spearman
value: 66.63669848497175
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 72.86478788045507
- type: cos_sim_spearman
value: 76.7946552053193
- type: euclidean_pearson
value: 75.01598530490269
- type: euclidean_spearman
value: 76.83618917858281
- type: manhattan_pearson
value: 74.68337628304332
- type: manhattan_spearman
value: 76.57480204017773
- task:
type: STS
dataset:
name: MTEB STS22 (de-en)
type: mteb/sts22-crosslingual-sts
config: de-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 55.922619099401984
- type: cos_sim_spearman
value: 56.599362477240774
- type: euclidean_pearson
value: 56.68307052369783
- type: euclidean_spearman
value: 54.28760436777401
- type: manhattan_pearson
value: 56.67763566500681
- type: manhattan_spearman
value: 53.94619541711359
- task:
type: STS
dataset:
name: MTEB STS22 (es-en)
type: mteb/sts22-crosslingual-sts
config: es-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 66.74357206710913
- type: cos_sim_spearman
value: 72.5208244925311
- type: euclidean_pearson
value: 67.49254562186032
- type: euclidean_spearman
value: 72.02469076238683
- type: manhattan_pearson
value: 67.45251772238085
- type: manhattan_spearman
value: 72.05538819984538
- task:
type: STS
dataset:
name: MTEB STS22 (it)
type: mteb/sts22-crosslingual-sts
config: it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 71.25734330033191
- type: cos_sim_spearman
value: 76.98349083946823
- type: euclidean_pearson
value: 73.71642838667736
- type: euclidean_spearman
value: 77.01715504651384
- type: manhattan_pearson
value: 73.61712711868105
- type: manhattan_spearman
value: 77.01392571153896
- task:
type: STS
dataset:
name: MTEB STS22 (pl-en)
type: mteb/sts22-crosslingual-sts
config: pl-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 63.18215462781212
- type: cos_sim_spearman
value: 65.54373266117607
- type: euclidean_pearson
value: 64.54126095439005
- type: euclidean_spearman
value: 65.30410369102711
- type: manhattan_pearson
value: 63.50332221148234
- type: manhattan_spearman
value: 64.3455878104313
- task:
type: STS
dataset:
name: MTEB STS22 (zh-en)
type: mteb/sts22-crosslingual-sts
config: zh-en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 62.30509221440029
- type: cos_sim_spearman
value: 65.99582704642478
- type: euclidean_pearson
value: 63.43818859884195
- type: euclidean_spearman
value: 66.83172582815764
- type: manhattan_pearson
value: 63.055779168508764
- type: manhattan_spearman
value: 65.49585020501449
- task:
type: STS
dataset:
name: MTEB STS22 (es-it)
type: mteb/sts22-crosslingual-sts
config: es-it
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 59.587830825340404
- type: cos_sim_spearman
value: 68.93467614588089
- type: euclidean_pearson
value: 62.3073527367404
- type: euclidean_spearman
value: 69.69758171553175
- type: manhattan_pearson
value: 61.9074580815789
- type: manhattan_spearman
value: 69.57696375597865
- task:
type: STS
dataset:
name: MTEB STS22 (de-fr)
type: mteb/sts22-crosslingual-sts
config: de-fr
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.143220125577066
- type: cos_sim_spearman
value: 67.78857859159226
- type: euclidean_pearson
value: 55.58225107923733
- type: euclidean_spearman
value: 67.80662907184563
- type: manhattan_pearson
value: 56.24953502726514
- type: manhattan_spearman
value: 67.98262125431616
- task:
type: STS
dataset:
name: MTEB STS22 (de-pl)
type: mteb/sts22-crosslingual-sts
config: de-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 21.826928900322066
- type: cos_sim_spearman
value: 49.578506634400405
- type: euclidean_pearson
value: 27.939890138843214
- type: euclidean_spearman
value: 52.71950519136242
- type: manhattan_pearson
value: 26.39878683847546
- type: manhattan_spearman
value: 47.54609580342499
- task:
type: STS
dataset:
name: MTEB STS22 (fr-pl)
type: mteb/sts22-crosslingual-sts
config: fr-pl
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 57.27603854632001
- type: cos_sim_spearman
value: 50.709255283710995
- type: euclidean_pearson
value: 59.5419024445929
- type: euclidean_spearman
value: 50.709255283710995
- type: manhattan_pearson
value: 59.03256832438492
- type: manhattan_spearman
value: 61.97797868009122
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 85.00757054859712
- type: cos_sim_spearman
value: 87.29283629622222
- type: euclidean_pearson
value: 86.54824171775536
- type: euclidean_spearman
value: 87.24364730491402
- type: manhattan_pearson
value: 86.5062156915074
- type: manhattan_spearman
value: 87.15052170378574
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 82.03549357197389
- type: mrr
value: 95.05437645143527
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 57.260999999999996
- type: map_at_10
value: 66.259
- type: map_at_100
value: 66.884
- type: map_at_1000
value: 66.912
- type: map_at_3
value: 63.685
- type: map_at_5
value: 65.35499999999999
- type: mrr_at_1
value: 60.333000000000006
- type: mrr_at_10
value: 67.5
- type: mrr_at_100
value: 68.013
- type: mrr_at_1000
value: 68.038
- type: mrr_at_3
value: 65.61099999999999
- type: mrr_at_5
value: 66.861
- type: ndcg_at_1
value: 60.333000000000006
- type: ndcg_at_10
value: 70.41
- type: ndcg_at_100
value: 73.10600000000001
- type: ndcg_at_1000
value: 73.846
- type: ndcg_at_3
value: 66.133
- type: ndcg_at_5
value: 68.499
- type: precision_at_1
value: 60.333000000000006
- type: precision_at_10
value: 9.232999999999999
- type: precision_at_100
value: 1.0630000000000002
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 25.667
- type: precision_at_5
value: 17.067
- type: recall_at_1
value: 57.260999999999996
- type: recall_at_10
value: 81.94399999999999
- type: recall_at_100
value: 93.867
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 70.339
- type: recall_at_5
value: 76.25
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.74356435643564
- type: cos_sim_ap
value: 93.13411948212683
- type: cos_sim_f1
value: 86.80521991300147
- type: cos_sim_precision
value: 84.00374181478017
- type: cos_sim_recall
value: 89.8
- type: dot_accuracy
value: 99.67920792079208
- type: dot_ap
value: 89.27277565444479
- type: dot_f1
value: 83.9276990718124
- type: dot_precision
value: 82.04393505253104
- type: dot_recall
value: 85.9
- type: euclidean_accuracy
value: 99.74257425742574
- type: euclidean_ap
value: 93.17993008259062
- type: euclidean_f1
value: 86.69396110542476
- type: euclidean_precision
value: 88.78406708595388
- type: euclidean_recall
value: 84.7
- type: manhattan_accuracy
value: 99.74257425742574
- type: manhattan_ap
value: 93.14413755550099
- type: manhattan_f1
value: 86.82483594144371
- type: manhattan_precision
value: 87.66564729867483
- type: manhattan_recall
value: 86
- type: max_accuracy
value: 99.74356435643564
- type: max_ap
value: 93.17993008259062
- type: max_f1
value: 86.82483594144371
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.525863806168566
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 32.68850574423839
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 49.71580650644033
- type: mrr
value: 50.50971903913081
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 29.152190498799484
- type: cos_sim_spearman
value: 29.686180371952727
- type: dot_pearson
value: 27.248664793816342
- type: dot_spearman
value: 28.37748983721745
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.20400000000000001
- type: map_at_10
value: 1.6209999999999998
- type: map_at_100
value: 9.690999999999999
- type: map_at_1000
value: 23.733
- type: map_at_3
value: 0.575
- type: map_at_5
value: 0.885
- type: mrr_at_1
value: 78
- type: mrr_at_10
value: 86.56700000000001
- type: mrr_at_100
value: 86.56700000000001
- type: mrr_at_1000
value: 86.56700000000001
- type: mrr_at_3
value: 85.667
- type: mrr_at_5
value: 86.56700000000001
- type: ndcg_at_1
value: 76
- type: ndcg_at_10
value: 71.326
- type: ndcg_at_100
value: 54.208999999999996
- type: ndcg_at_1000
value: 49.252
- type: ndcg_at_3
value: 74.235
- type: ndcg_at_5
value: 73.833
- type: precision_at_1
value: 78
- type: precision_at_10
value: 74.8
- type: precision_at_100
value: 55.50000000000001
- type: precision_at_1000
value: 21.836
- type: precision_at_3
value: 78
- type: precision_at_5
value: 78
- type: recall_at_1
value: 0.20400000000000001
- type: recall_at_10
value: 1.894
- type: recall_at_100
value: 13.245999999999999
- type: recall_at_1000
value: 46.373
- type: recall_at_3
value: 0.613
- type: recall_at_5
value: 0.991
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (sqi-eng)
type: mteb/tatoeba-bitext-mining
config: sqi-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.89999999999999
- type: f1
value: 94.69999999999999
- type: precision
value: 94.11666666666667
- type: recall
value: 95.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fry-eng)
type: mteb/tatoeba-bitext-mining
config: fry-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 68.20809248554913
- type: f1
value: 63.431048720066066
- type: precision
value: 61.69143958161298
- type: recall
value: 68.20809248554913
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kur-eng)
type: mteb/tatoeba-bitext-mining
config: kur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 71.21951219512195
- type: f1
value: 66.82926829268293
- type: precision
value: 65.1260162601626
- type: recall
value: 71.21951219512195
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tur-eng)
type: mteb/tatoeba-bitext-mining
config: tur-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.2
- type: f1
value: 96.26666666666667
- type: precision
value: 95.8
- type: recall
value: 97.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (deu-eng)
type: mteb/tatoeba-bitext-mining
config: deu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 99.3
- type: f1
value: 99.06666666666666
- type: precision
value: 98.95
- type: recall
value: 99.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nld-eng)
type: mteb/tatoeba-bitext-mining
config: nld-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.63333333333333
- type: precision
value: 96.26666666666668
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ron-eng)
type: mteb/tatoeba-bitext-mining
config: ron-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.86666666666666
- type: precision
value: 94.31666666666668
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ang-eng)
type: mteb/tatoeba-bitext-mining
config: ang-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 47.01492537313433
- type: f1
value: 40.178867566927266
- type: precision
value: 38.179295828549556
- type: recall
value: 47.01492537313433
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ido-eng)
type: mteb/tatoeba-bitext-mining
config: ido-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.5
- type: f1
value: 83.62537480063796
- type: precision
value: 82.44555555555554
- type: recall
value: 86.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jav-eng)
type: mteb/tatoeba-bitext-mining
config: jav-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.48780487804879
- type: f1
value: 75.45644599303138
- type: precision
value: 73.37398373983739
- type: recall
value: 80.48780487804879
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (isl-eng)
type: mteb/tatoeba-bitext-mining
config: isl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.95666666666666
- type: precision
value: 91.125
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slv-eng)
type: mteb/tatoeba-bitext-mining
config: slv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.73754556500607
- type: f1
value: 89.65168084244632
- type: precision
value: 88.73025516403402
- type: recall
value: 91.73754556500607
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cym-eng)
type: mteb/tatoeba-bitext-mining
config: cym-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81.04347826086956
- type: f1
value: 76.2128364389234
- type: precision
value: 74.2
- type: recall
value: 81.04347826086956
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kaz-eng)
type: mteb/tatoeba-bitext-mining
config: kaz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.65217391304348
- type: f1
value: 79.4376811594203
- type: precision
value: 77.65797101449274
- type: recall
value: 83.65217391304348
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (est-eng)
type: mteb/tatoeba-bitext-mining
config: est-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.5
- type: f1
value: 85.02690476190476
- type: precision
value: 83.96261904761904
- type: recall
value: 87.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (heb-eng)
type: mteb/tatoeba-bitext-mining
config: heb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89.3
- type: f1
value: 86.52333333333333
- type: precision
value: 85.22833333333332
- type: recall
value: 89.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gla-eng)
type: mteb/tatoeba-bitext-mining
config: gla-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.01809408926418
- type: f1
value: 59.00594446432805
- type: precision
value: 56.827215807915444
- type: recall
value: 65.01809408926418
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mar-eng)
type: mteb/tatoeba-bitext-mining
config: mar-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.2
- type: f1
value: 88.58
- type: precision
value: 87.33333333333334
- type: recall
value: 91.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lat-eng)
type: mteb/tatoeba-bitext-mining
config: lat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.199999999999996
- type: f1
value: 53.299166276284915
- type: precision
value: 51.3383908045977
- type: recall
value: 59.199999999999996
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bel-eng)
type: mteb/tatoeba-bitext-mining
config: bel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.2
- type: precision
value: 90.25
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pms-eng)
type: mteb/tatoeba-bitext-mining
config: pms-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 64.76190476190476
- type: f1
value: 59.867110667110666
- type: precision
value: 58.07390192653351
- type: recall
value: 64.76190476190476
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gle-eng)
type: mteb/tatoeba-bitext-mining
config: gle-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.2
- type: f1
value: 71.48147546897547
- type: precision
value: 69.65409090909091
- type: recall
value: 76.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pes-eng)
type: mteb/tatoeba-bitext-mining
config: pes-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.8
- type: f1
value: 92.14
- type: precision
value: 91.35833333333333
- type: recall
value: 93.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nob-eng)
type: mteb/tatoeba-bitext-mining
config: nob-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.89999999999999
- type: f1
value: 97.2
- type: precision
value: 96.85000000000001
- type: recall
value: 97.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bul-eng)
type: mteb/tatoeba-bitext-mining
config: bul-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 92.93333333333334
- type: precision
value: 92.13333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cbk-eng)
type: mteb/tatoeba-bitext-mining
config: cbk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.1
- type: f1
value: 69.14817460317461
- type: precision
value: 67.2515873015873
- type: recall
value: 74.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hun-eng)
type: mteb/tatoeba-bitext-mining
config: hun-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.19999999999999
- type: f1
value: 94.01333333333335
- type: precision
value: 93.46666666666667
- type: recall
value: 95.19999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uig-eng)
type: mteb/tatoeba-bitext-mining
config: uig-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.9
- type: f1
value: 72.07523809523809
- type: precision
value: 70.19777777777779
- type: recall
value: 76.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (rus-eng)
type: mteb/tatoeba-bitext-mining
config: rus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.1
- type: f1
value: 92.31666666666666
- type: precision
value: 91.43333333333332
- type: recall
value: 94.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (spa-eng)
type: mteb/tatoeba-bitext-mining
config: spa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.8
- type: f1
value: 97.1
- type: precision
value: 96.76666666666668
- type: recall
value: 97.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hye-eng)
type: mteb/tatoeba-bitext-mining
config: hye-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.85714285714286
- type: f1
value: 90.92093441150045
- type: precision
value: 90.00449236298293
- type: recall
value: 92.85714285714286
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tel-eng)
type: mteb/tatoeba-bitext-mining
config: tel-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.16239316239316
- type: f1
value: 91.33903133903132
- type: precision
value: 90.56267806267806
- type: recall
value: 93.16239316239316
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (afr-eng)
type: mteb/tatoeba-bitext-mining
config: afr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.4
- type: f1
value: 90.25666666666666
- type: precision
value: 89.25833333333334
- type: recall
value: 92.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mon-eng)
type: mteb/tatoeba-bitext-mining
config: mon-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.22727272727272
- type: f1
value: 87.53030303030303
- type: precision
value: 86.37121212121211
- type: recall
value: 90.22727272727272
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arz-eng)
type: mteb/tatoeba-bitext-mining
config: arz-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 79.03563941299791
- type: f1
value: 74.7349505840072
- type: precision
value: 72.9035639412998
- type: recall
value: 79.03563941299791
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hrv-eng)
type: mteb/tatoeba-bitext-mining
config: hrv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97
- type: f1
value: 96.15
- type: precision
value: 95.76666666666668
- type: recall
value: 97
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nov-eng)
type: mteb/tatoeba-bitext-mining
config: nov-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.26459143968872
- type: f1
value: 71.55642023346303
- type: precision
value: 69.7544932369835
- type: recall
value: 76.26459143968872
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (gsw-eng)
type: mteb/tatoeba-bitext-mining
config: gsw-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 58.119658119658126
- type: f1
value: 51.65242165242165
- type: precision
value: 49.41768108434775
- type: recall
value: 58.119658119658126
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nds-eng)
type: mteb/tatoeba-bitext-mining
config: nds-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 74.3
- type: f1
value: 69.52055555555555
- type: precision
value: 67.7574938949939
- type: recall
value: 74.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ukr-eng)
type: mteb/tatoeba-bitext-mining
config: ukr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.8
- type: f1
value: 93.31666666666666
- type: precision
value: 92.60000000000001
- type: recall
value: 94.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (uzb-eng)
type: mteb/tatoeba-bitext-mining
config: uzb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.63551401869158
- type: f1
value: 72.35202492211837
- type: precision
value: 70.60358255451713
- type: recall
value: 76.63551401869158
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lit-eng)
type: mteb/tatoeba-bitext-mining
config: lit-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.4
- type: f1
value: 88.4811111111111
- type: precision
value: 87.7452380952381
- type: recall
value: 90.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ina-eng)
type: mteb/tatoeba-bitext-mining
config: ina-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95
- type: f1
value: 93.60666666666667
- type: precision
value: 92.975
- type: recall
value: 95
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lfn-eng)
type: mteb/tatoeba-bitext-mining
config: lfn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 63.01595782872099
- type: precision
value: 61.596587301587306
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (zsm-eng)
type: mteb/tatoeba-bitext-mining
config: zsm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.7
- type: f1
value: 94.52999999999999
- type: precision
value: 94
- type: recall
value: 95.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ita-eng)
type: mteb/tatoeba-bitext-mining
config: ita-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.28999999999999
- type: precision
value: 92.675
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cmn-eng)
type: mteb/tatoeba-bitext-mining
config: cmn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.75
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (lvs-eng)
type: mteb/tatoeba-bitext-mining
config: lvs-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.9
- type: f1
value: 89.83
- type: precision
value: 88.92
- type: recall
value: 91.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (glg-eng)
type: mteb/tatoeba-bitext-mining
config: glg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.34222222222223
- type: precision
value: 92.75416666666668
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ceb-eng)
type: mteb/tatoeba-bitext-mining
config: ceb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 60.333333333333336
- type: f1
value: 55.31203703703703
- type: precision
value: 53.39971108326371
- type: recall
value: 60.333333333333336
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bre-eng)
type: mteb/tatoeba-bitext-mining
config: bre-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 12.9
- type: f1
value: 11.099861903031458
- type: precision
value: 10.589187932631877
- type: recall
value: 12.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ben-eng)
type: mteb/tatoeba-bitext-mining
config: ben-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 86.7
- type: f1
value: 83.0152380952381
- type: precision
value: 81.37833333333333
- type: recall
value: 86.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swg-eng)
type: mteb/tatoeba-bitext-mining
config: swg-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.39285714285714
- type: f1
value: 56.832482993197274
- type: precision
value: 54.56845238095237
- type: recall
value: 63.39285714285714
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (arq-eng)
type: mteb/tatoeba-bitext-mining
config: arq-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 48.73765093304062
- type: f1
value: 41.555736920720456
- type: precision
value: 39.06874531737319
- type: recall
value: 48.73765093304062
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kab-eng)
type: mteb/tatoeba-bitext-mining
config: kab-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 41.099999999999994
- type: f1
value: 36.540165945165946
- type: precision
value: 35.05175685425686
- type: recall
value: 41.099999999999994
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fra-eng)
type: mteb/tatoeba-bitext-mining
config: fra-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.42333333333333
- type: precision
value: 92.75833333333333
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (por-eng)
type: mteb/tatoeba-bitext-mining
config: por-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.89999999999999
- type: f1
value: 93.63333333333334
- type: precision
value: 93.01666666666665
- type: recall
value: 94.89999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tat-eng)
type: mteb/tatoeba-bitext-mining
config: tat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.9
- type: f1
value: 73.64833333333334
- type: precision
value: 71.90282106782105
- type: recall
value: 77.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (oci-eng)
type: mteb/tatoeba-bitext-mining
config: oci-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 59.4
- type: f1
value: 54.90521367521367
- type: precision
value: 53.432840025471606
- type: recall
value: 59.4
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pol-eng)
type: mteb/tatoeba-bitext-mining
config: pol-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.39999999999999
- type: f1
value: 96.6
- type: precision
value: 96.2
- type: recall
value: 97.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (war-eng)
type: mteb/tatoeba-bitext-mining
config: war-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 67.2
- type: f1
value: 62.25926129426129
- type: precision
value: 60.408376623376626
- type: recall
value: 67.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (aze-eng)
type: mteb/tatoeba-bitext-mining
config: aze-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.2
- type: f1
value: 87.60666666666667
- type: precision
value: 86.45277777777778
- type: recall
value: 90.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (vie-eng)
type: mteb/tatoeba-bitext-mining
config: vie-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 97.7
- type: f1
value: 97
- type: precision
value: 96.65
- type: recall
value: 97.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (nno-eng)
type: mteb/tatoeba-bitext-mining
config: nno-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.2
- type: f1
value: 91.39746031746031
- type: precision
value: 90.6125
- type: recall
value: 93.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cha-eng)
type: mteb/tatoeba-bitext-mining
config: cha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 32.11678832116788
- type: f1
value: 27.210415386260234
- type: precision
value: 26.20408990846947
- type: recall
value: 32.11678832116788
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mhr-eng)
type: mteb/tatoeba-bitext-mining
config: mhr-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.787319277832475
- type: precision
value: 6.3452094433344435
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dan-eng)
type: mteb/tatoeba-bitext-mining
config: dan-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.1
- type: f1
value: 95.08
- type: precision
value: 94.61666666666667
- type: recall
value: 96.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ell-eng)
type: mteb/tatoeba-bitext-mining
config: ell-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.3
- type: f1
value: 93.88333333333333
- type: precision
value: 93.18333333333332
- type: recall
value: 95.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (amh-eng)
type: mteb/tatoeba-bitext-mining
config: amh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.11904761904762
- type: f1
value: 80.69444444444444
- type: precision
value: 78.72023809523809
- type: recall
value: 85.11904761904762
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (pam-eng)
type: mteb/tatoeba-bitext-mining
config: pam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 11.1
- type: f1
value: 9.276381801735853
- type: precision
value: 8.798174603174601
- type: recall
value: 11.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hsb-eng)
type: mteb/tatoeba-bitext-mining
config: hsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 63.56107660455487
- type: f1
value: 58.70433569191332
- type: precision
value: 56.896926581464015
- type: recall
value: 63.56107660455487
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (srp-eng)
type: mteb/tatoeba-bitext-mining
config: srp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.69999999999999
- type: f1
value: 93.10000000000001
- type: precision
value: 92.35
- type: recall
value: 94.69999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (epo-eng)
type: mteb/tatoeba-bitext-mining
config: epo-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.8
- type: f1
value: 96.01222222222222
- type: precision
value: 95.67083333333332
- type: recall
value: 96.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kzj-eng)
type: mteb/tatoeba-bitext-mining
config: kzj-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 9.2
- type: f1
value: 7.911555250305249
- type: precision
value: 7.631246556216846
- type: recall
value: 9.2
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (awa-eng)
type: mteb/tatoeba-bitext-mining
config: awa-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.48917748917748
- type: f1
value: 72.27375798804371
- type: precision
value: 70.14430014430013
- type: recall
value: 77.48917748917748
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fao-eng)
type: mteb/tatoeba-bitext-mining
config: fao-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 77.09923664122137
- type: f1
value: 72.61541257724463
- type: precision
value: 70.8998380754106
- type: recall
value: 77.09923664122137
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mal-eng)
type: mteb/tatoeba-bitext-mining
config: mal-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 98.2532751091703
- type: f1
value: 97.69529354682193
- type: precision
value: 97.42843279961184
- type: recall
value: 98.2532751091703
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ile-eng)
type: mteb/tatoeba-bitext-mining
config: ile-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 82.8
- type: f1
value: 79.14672619047619
- type: precision
value: 77.59489247311828
- type: recall
value: 82.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (bos-eng)
type: mteb/tatoeba-bitext-mining
config: bos-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.35028248587571
- type: f1
value: 92.86252354048965
- type: precision
value: 92.2080979284369
- type: recall
value: 94.35028248587571
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cor-eng)
type: mteb/tatoeba-bitext-mining
config: cor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.5
- type: f1
value: 6.282429263935621
- type: precision
value: 5.783274240739785
- type: recall
value: 8.5
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (cat-eng)
type: mteb/tatoeba-bitext-mining
config: cat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 91.025
- type: precision
value: 90.30428571428571
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (eus-eng)
type: mteb/tatoeba-bitext-mining
config: eus-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 81
- type: f1
value: 77.8232380952381
- type: precision
value: 76.60194444444444
- type: recall
value: 81
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yue-eng)
type: mteb/tatoeba-bitext-mining
config: yue-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91
- type: f1
value: 88.70857142857142
- type: precision
value: 87.7
- type: recall
value: 91
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swe-eng)
type: mteb/tatoeba-bitext-mining
config: swe-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.3
- type: precision
value: 94.76666666666667
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dtp-eng)
type: mteb/tatoeba-bitext-mining
config: dtp-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 8.1
- type: f1
value: 7.001008218834307
- type: precision
value: 6.708329562594269
- type: recall
value: 8.1
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kat-eng)
type: mteb/tatoeba-bitext-mining
config: kat-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 87.1313672922252
- type: f1
value: 84.09070598748882
- type: precision
value: 82.79171454104429
- type: recall
value: 87.1313672922252
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (jpn-eng)
type: mteb/tatoeba-bitext-mining
config: jpn-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.28333333333333
- type: precision
value: 94.73333333333332
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (csb-eng)
type: mteb/tatoeba-bitext-mining
config: csb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 42.29249011857708
- type: f1
value: 36.981018542283365
- type: precision
value: 35.415877813576024
- type: recall
value: 42.29249011857708
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (xho-eng)
type: mteb/tatoeba-bitext-mining
config: xho-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 83.80281690140845
- type: f1
value: 80.86854460093896
- type: precision
value: 79.60093896713614
- type: recall
value: 83.80281690140845
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (orv-eng)
type: mteb/tatoeba-bitext-mining
config: orv-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 45.26946107784431
- type: f1
value: 39.80235464678088
- type: precision
value: 38.14342660001342
- type: recall
value: 45.26946107784431
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ind-eng)
type: mteb/tatoeba-bitext-mining
config: ind-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.3
- type: f1
value: 92.9
- type: precision
value: 92.26666666666668
- type: recall
value: 94.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tuk-eng)
type: mteb/tatoeba-bitext-mining
config: tuk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 37.93103448275862
- type: f1
value: 33.15192743764172
- type: precision
value: 31.57456528146183
- type: recall
value: 37.93103448275862
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (max-eng)
type: mteb/tatoeba-bitext-mining
config: max-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 69.01408450704226
- type: f1
value: 63.41549295774648
- type: precision
value: 61.342778895595806
- type: recall
value: 69.01408450704226
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (swh-eng)
type: mteb/tatoeba-bitext-mining
config: swh-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 76.66666666666667
- type: f1
value: 71.60705960705961
- type: precision
value: 69.60683760683762
- type: recall
value: 76.66666666666667
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (hin-eng)
type: mteb/tatoeba-bitext-mining
config: hin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 95.8
- type: f1
value: 94.48333333333333
- type: precision
value: 93.83333333333333
- type: recall
value: 95.8
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (dsb-eng)
type: mteb/tatoeba-bitext-mining
config: dsb-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 52.81837160751566
- type: f1
value: 48.435977731384824
- type: precision
value: 47.11291973845539
- type: recall
value: 52.81837160751566
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ber-eng)
type: mteb/tatoeba-bitext-mining
config: ber-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 44.9
- type: f1
value: 38.88962621607783
- type: precision
value: 36.95936507936508
- type: recall
value: 44.9
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tam-eng)
type: mteb/tatoeba-bitext-mining
config: tam-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 90.55374592833876
- type: f1
value: 88.22553125484721
- type: precision
value: 87.26927252985884
- type: recall
value: 90.55374592833876
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (slk-eng)
type: mteb/tatoeba-bitext-mining
config: slk-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 94.6
- type: f1
value: 93.13333333333333
- type: precision
value: 92.45333333333333
- type: recall
value: 94.6
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tgl-eng)
type: mteb/tatoeba-bitext-mining
config: tgl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 93.7
- type: f1
value: 91.99666666666667
- type: precision
value: 91.26666666666668
- type: recall
value: 93.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ast-eng)
type: mteb/tatoeba-bitext-mining
config: ast-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 85.03937007874016
- type: f1
value: 81.75853018372703
- type: precision
value: 80.34120734908137
- type: recall
value: 85.03937007874016
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (mkd-eng)
type: mteb/tatoeba-bitext-mining
config: mkd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88.3
- type: f1
value: 85.5
- type: precision
value: 84.25833333333334
- type: recall
value: 88.3
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (khm-eng)
type: mteb/tatoeba-bitext-mining
config: khm-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 65.51246537396122
- type: f1
value: 60.02297410192148
- type: precision
value: 58.133467727289236
- type: recall
value: 65.51246537396122
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ces-eng)
type: mteb/tatoeba-bitext-mining
config: ces-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96
- type: f1
value: 94.89
- type: precision
value: 94.39166666666667
- type: recall
value: 96
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tzl-eng)
type: mteb/tatoeba-bitext-mining
config: tzl-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 57.692307692307686
- type: f1
value: 53.162393162393165
- type: precision
value: 51.70673076923077
- type: recall
value: 57.692307692307686
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (urd-eng)
type: mteb/tatoeba-bitext-mining
config: urd-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 91.60000000000001
- type: f1
value: 89.21190476190475
- type: precision
value: 88.08666666666667
- type: recall
value: 91.60000000000001
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (ara-eng)
type: mteb/tatoeba-bitext-mining
config: ara-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 88
- type: f1
value: 85.47
- type: precision
value: 84.43266233766234
- type: recall
value: 88
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (kor-eng)
type: mteb/tatoeba-bitext-mining
config: kor-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 92.7
- type: f1
value: 90.64999999999999
- type: precision
value: 89.68333333333332
- type: recall
value: 92.7
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (yid-eng)
type: mteb/tatoeba-bitext-mining
config: yid-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 80.30660377358491
- type: f1
value: 76.33044137466307
- type: precision
value: 74.78970125786164
- type: recall
value: 80.30660377358491
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (fin-eng)
type: mteb/tatoeba-bitext-mining
config: fin-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.39999999999999
- type: f1
value: 95.44
- type: precision
value: 94.99166666666666
- type: recall
value: 96.39999999999999
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (tha-eng)
type: mteb/tatoeba-bitext-mining
config: tha-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 96.53284671532847
- type: f1
value: 95.37712895377129
- type: precision
value: 94.7992700729927
- type: recall
value: 96.53284671532847
- task:
type: BitextMining
dataset:
name: MTEB Tatoeba (wuu-eng)
type: mteb/tatoeba-bitext-mining
config: wuu-eng
split: test
revision: 9080400076fbadbb4c4dcb136ff4eddc40b42553
metrics:
- type: accuracy
value: 89
- type: f1
value: 86.23190476190476
- type: precision
value: 85.035
- type: recall
value: 89
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.585
- type: map_at_10
value: 9.012
- type: map_at_100
value: 14.027000000000001
- type: map_at_1000
value: 15.565000000000001
- type: map_at_3
value: 5.032
- type: map_at_5
value: 6.657
- type: mrr_at_1
value: 28.571
- type: mrr_at_10
value: 45.377
- type: mrr_at_100
value: 46.119
- type: mrr_at_1000
value: 46.127
- type: mrr_at_3
value: 41.156
- type: mrr_at_5
value: 42.585
- type: ndcg_at_1
value: 27.551
- type: ndcg_at_10
value: 23.395
- type: ndcg_at_100
value: 33.342
- type: ndcg_at_1000
value: 45.523
- type: ndcg_at_3
value: 25.158
- type: ndcg_at_5
value: 23.427
- type: precision_at_1
value: 28.571
- type: precision_at_10
value: 21.429000000000002
- type: precision_at_100
value: 6.714
- type: precision_at_1000
value: 1.473
- type: precision_at_3
value: 27.211000000000002
- type: precision_at_5
value: 24.490000000000002
- type: recall_at_1
value: 2.585
- type: recall_at_10
value: 15.418999999999999
- type: recall_at_100
value: 42.485
- type: recall_at_1000
value: 79.536
- type: recall_at_3
value: 6.239999999999999
- type: recall_at_5
value: 8.996
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.3234
- type: ap
value: 14.361688653847423
- type: f1
value: 54.819068624319044
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.97792869269949
- type: f1
value: 62.28965628513728
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 38.90540145385218
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.53513739047506
- type: cos_sim_ap
value: 75.27741586677557
- type: cos_sim_f1
value: 69.18792902473774
- type: cos_sim_precision
value: 67.94708725515136
- type: cos_sim_recall
value: 70.47493403693932
- type: dot_accuracy
value: 84.7052512368123
- type: dot_ap
value: 69.36075482849378
- type: dot_f1
value: 64.44688376631296
- type: dot_precision
value: 59.92288500793831
- type: dot_recall
value: 69.70976253298153
- type: euclidean_accuracy
value: 86.60666388508076
- type: euclidean_ap
value: 75.47512772621097
- type: euclidean_f1
value: 69.413872536473
- type: euclidean_precision
value: 67.39562624254472
- type: euclidean_recall
value: 71.55672823218997
- type: manhattan_accuracy
value: 86.52917684925792
- type: manhattan_ap
value: 75.34000110496703
- type: manhattan_f1
value: 69.28489190226429
- type: manhattan_precision
value: 67.24608889992551
- type: manhattan_recall
value: 71.45118733509234
- type: max_accuracy
value: 86.60666388508076
- type: max_ap
value: 75.47512772621097
- type: max_f1
value: 69.413872536473
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.01695967710637
- type: cos_sim_ap
value: 85.8298270742901
- type: cos_sim_f1
value: 78.46988128389272
- type: cos_sim_precision
value: 74.86017897091722
- type: cos_sim_recall
value: 82.44533415460425
- type: dot_accuracy
value: 88.19420188613343
- type: dot_ap
value: 83.82679165901324
- type: dot_f1
value: 76.55833777304208
- type: dot_precision
value: 75.6884875846501
- type: dot_recall
value: 77.44841392054204
- type: euclidean_accuracy
value: 89.03054294252338
- type: euclidean_ap
value: 85.89089555185325
- type: euclidean_f1
value: 78.62997658079624
- type: euclidean_precision
value: 74.92329149232914
- type: euclidean_recall
value: 82.72251308900523
- type: manhattan_accuracy
value: 89.0266620095471
- type: manhattan_ap
value: 85.86458997929147
- type: manhattan_f1
value: 78.50685331000291
- type: manhattan_precision
value: 74.5499861534201
- type: manhattan_recall
value: 82.90729904527257
- type: max_accuracy
value: 89.03054294252338
- type: max_ap
value: 85.89089555185325
- type: max_f1
value: 78.62997658079624
---
# soichisumi/multilingual-e5-large-Q8_0-GGUF
This model was converted to GGUF format from [`intfloat/multilingual-e5-large`](https://huggingface.co/intfloat/multilingual-e5-large) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/intfloat/multilingual-e5-large) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo soichisumi/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo soichisumi/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
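Since `multilingual-e5-large` is an embedding model rather than a text-generation model, you will typically ask the server for embeddings instead of completions. Below is a minimal, hedged sketch in Python; it assumes the server was started with embeddings enabled (e.g. an `--embedding`-style flag, depending on your llama.cpp version), listens on the default `http://localhost:8080`, and exposes an OpenAI-compatible `/v1/embeddings` endpoint. Note that the E5 family expects `query: ` / `passage: ` prefixes on its inputs.
```python
# Minimal sketch: fetch an embedding from a local llama.cpp server.
# Assumptions: embeddings are enabled on the server, it listens on port 8080,
# and it exposes the OpenAI-compatible /v1/embeddings route.
import requests

resp = requests.post(
    "http://localhost:8080/v1/embeddings",
    json={"input": "query: how much protein should a female eat"},
)
resp.raise_for_status()
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))  # dimensionality of the returned vector
```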
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo soichisumi/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo soichisumi/multilingual-e5-large-Q8_0-GGUF --hf-file multilingual-e5-large-q8_0.gguf -c 2048
```
| [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-08-25T09:37:15 | 2024-08-25T10:51:01 | 37 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Phi-3.1-mini-128k-instruct - GGUF
- Model creator: https://huggingface.co/tuong-nguyen-prd/
- Original model: https://huggingface.co/tuong-nguyen-prd/Phi-3.1-mini-128k-instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Phi-3.1-mini-128k-instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q2_K.gguf) | Q2_K | 1.32GB |
| [Phi-3.1-mini-128k-instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.IQ3_XS.gguf) | IQ3_XS | 1.51GB |
| [Phi-3.1-mini-128k-instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.IQ3_S.gguf) | IQ3_S | 1.57GB |
| [Phi-3.1-mini-128k-instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q3_K_S.gguf) | Q3_K_S | 1.57GB |
| [Phi-3.1-mini-128k-instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.IQ3_M.gguf) | IQ3_M | 1.73GB |
| [Phi-3.1-mini-128k-instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q3_K.gguf) | Q3_K | 1.82GB |
| [Phi-3.1-mini-128k-instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q3_K_M.gguf) | Q3_K_M | 1.82GB |
| [Phi-3.1-mini-128k-instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q3_K_L.gguf) | Q3_K_L | 1.94GB |
| [Phi-3.1-mini-128k-instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.IQ4_XS.gguf) | IQ4_XS | 1.93GB |
| [Phi-3.1-mini-128k-instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q4_0.gguf) | Q4_0 | 2.03GB |
| [Phi-3.1-mini-128k-instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.IQ4_NL.gguf) | IQ4_NL | 2.04GB |
| [Phi-3.1-mini-128k-instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q4_K_S.gguf) | Q4_K_S | 2.04GB |
| [Phi-3.1-mini-128k-instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q4_K.gguf) | Q4_K | 2.23GB |
| [Phi-3.1-mini-128k-instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q4_K_M.gguf) | Q4_K_M | 2.23GB |
| [Phi-3.1-mini-128k-instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q4_1.gguf) | Q4_1 | 2.24GB |
| [Phi-3.1-mini-128k-instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q5_0.gguf) | Q5_0 | 2.46GB |
| [Phi-3.1-mini-128k-instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q5_K_S.gguf) | Q5_K_S | 2.46GB |
| [Phi-3.1-mini-128k-instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q5_K.gguf) | Q5_K | 2.62GB |
| [Phi-3.1-mini-128k-instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q5_K_M.gguf) | Q5_K_M | 2.62GB |
| [Phi-3.1-mini-128k-instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q5_1.gguf) | Q5_1 | 2.68GB |
| [Phi-3.1-mini-128k-instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q6_K.gguf) | Q6_K | 2.92GB |
| [Phi-3.1-mini-128k-instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/tuong-nguyen-prd_-_Phi-3.1-mini-128k-instruct-gguf/blob/main/Phi-3.1-mini-128k-instruct.Q8_0.gguf) | Q8_0 | 3.78GB |
Original model description:
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
These datasets include both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Mini version, and comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which denote the context length (in tokens) each can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) <br>
📖 [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) <br>
🛠️ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3)
| | Short Context | Long Context |
| :- | :- | :- |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. It is suited for applications that require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI-powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## Release Notes
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback.
The model used additional post-training data, leading to substantial gains in long-context understanding, instruction following, and structured output.
We have also improved multi-turn conversation quality, added explicit support for the `<|system|>` tag, and significantly improved reasoning capability.
We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
The tables below highlight the new release's improvements in instruction following, structured output, reasoning, and long-context understanding on our public and internal benchmark datasets.
| Benchmarks | Original | June 2024 Update |
| :- | :- | :- |
| Instruction Extra Hard | 5.7 | 5.9 |
| Instruction Hard | 5.0 | 5.2 |
| JSON Structure Output | 1.9 | 60.1 |
| XML Structure Output | 47.8 | 52.9 |
| GPQA | 25.9 | 29.7 |
| MMLU | 68.1 | 69.7 |
| **Average** | **25.7** | **37.3** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
| :-------------------| :------| :------| :------| :------| :------| :------| :---------|
| Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | **68.8** |
| June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | **84.6** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
| :-------------------| :--------| :-----| :------| :------| :------------| :---------|
| Original | 27 | 29 | 40 | 33 | 33 | **32.4** |
| June 2024 Update | 85 | 63 | 72 | 93 | 72 | **77** |
Notes: If users would like to check out the previous version, use the git commit id **bb5bf1e4001277a606e11debca0ef80323e5f824**. For model conversion, e.g. to GGUF and other formats, we invite the community to experiment with various approaches and share their valuable feedback. Let's innovate together!
## How to Use
Phi-3 Mini-128K-Instruct has been integrated into the development version (4.41.3) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```
Phi-3 Mini-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3).
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
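As a minimal, hypothetical sketch of extending the vocabulary before fine-tuning (the token name below is illustrative, not part of the released tokenizer):
```python
# Hypothetical sketch: add a new special token and keep the embedding
# matrix in sync, staying within the model's 32064-token capacity.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True
)

tokenizer.add_tokens(["<|custom_tool|>"], special_tokens=True)  # illustrative token
model.resize_token_embeddings(len(tokenizer))
```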
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. In the case of a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
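In practice you rarely assemble this string by hand. Assuming the released tokenizer ships with a matching chat template (a hedged assumption), `apply_chat_template` can render a message list into the same format:
```python
# Sketch: render messages into the Phi-3 chat format via the tokenizer's
# chat template (assumed present), ending with <|assistant|> so the model
# continues from there.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How to explain Internet for a medieval knight?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```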
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-128k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="flash_attention_2"`.
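A minimal variant of the loading call above with flash attention enabled, assuming the `flash_attn` package is installed and the GPU supports it:
```python
# Same loading call as above, but with FlashAttention-2 selected.
# Assumes flash_attn is installed and the GPU supports it.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
)
```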
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend that users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 10 days
* Training data: 4.9T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between May and June 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release date: June 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.9 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty and helpfulness.
We focus on the quality of data that could potentially improve the model's reasoning ability, and we filter the publicly available documents to contain the correct level of knowledge. For example, the result of a Premier League game on a particular day might be good training data for frontier models, but we remove such information to leave more model capacity for reasoning in small models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
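For orientation, a minimal single-GPU sketch of SFT with TRL is shown below. The dataset and the exact `SFTTrainer` arguments are assumptions (they vary across TRL versions); the linked script remains the reference implementation.

```python
# Minimal SFT sketch with TRL (illustrative only; see the linked
# sample_finetune.py for the official multi-GPU example).
# The dataset and SFTTrainer arguments are assumptions and vary by TRL version.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTTrainer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Placeholder dataset: any dataset with a plain "text" column works here.
dataset = load_dataset("imdb", split="train[:1%]")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
)
trainer.train()
```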
## Benchmarks
We report the results under completion format for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool for evaluating language models; in particular, we did not optimize the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| Category | Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| :----------| :-----------| :---------------------| :----------| :------------| :--------------| :----------------| :-------------------|
| Popular aggregated benchmark | AGI Eval <br>5-shot| 39.5 | 42.1 | 35.1 | 45.2 | 42 | 48.4 |
| | MMLU <br>5-shot | 69.7 | 63.6 | 61.7 | 70.5 | 66.5 | 71.4 |
| | BigBench Hard <br>3-shot | 72.1 | 59.6 | 57.3 | 69.7 | 51.5 | 68.3 |
| Language Understanding | ANLI <br>7-shot | 52.3 | 48.7 | 47.1 | 55.2 | 57.3 | 58.1 |
| | HellaSwag <br>5-shot | 70.5 | 49.8 | 58.5 | 70.4 | 71.1 | 78.8 |
| Reasoning | ARC Challenge <br>10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| | BoolQ <br>0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| | MedQA <br>2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| | OpenBookQA <br>10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| | PIQA <br>5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| | GPQA <br>0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| | Social IQA <br>5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| | TruthfulQA (MC2) <br>10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| | WinoGrande <br>5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Factual Knowledge | TriviaQA <br>5-shot | 57.8 | 72.3 | 75.2 | 82.2 | 67.7 | 85.8 |
| Math | GSM8K CoT <br>8-shot | 85.3 | 59.8 | 46.4 | 64.7 | 77.4 | 78.1 |
| Code Generation | HumanEval <br>0-shot | 60.4 | 34.1 | 28.0 | 37.8 | 60.4 | 62.2 |
| | MBPP <br>3-shot | 70.0 | 51.5 | 50.8 | 60.2 | 67.7 | 77.8 |
| **Average** | | **66.4** | **56.0** | **56.4** | **64.4** | **65.5** | **70.3** |
**Long Context**: Phi-3 Mini-128K-Instruct supports a 128K context length, so the model is capable of several long-context tasks, including long document/meeting summarization and long-document QA.
| Benchmark | Phi-3 Mini-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct |
| :---------------| :--------------------------|:------------|:--------------|:---------------------|
| GovReport | 25.3 | 4.9 | 20.3 | 10.3 |
| QMSum | 21.9 | 15.5 | 20.6 | 2.9 |
| Qasper | 41.6 | 23.5 | 26.6 | 8.1 |
| SQuALITY | 24.1 | 14.7 | 16.2 | 25 |
| SummScreenFD | 16.8 | 9.3 | 11.3 | 5.1 |
| **Average** | **25.9** | **13.6** | **19.0** | **10.3** |
We take a closer look at different categories across 100 public benchmark datasets in the table below:
| Category | Phi-3-Mini-128K-Instruct | Gemma-7B | Mistral-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo |
|:----------|:--------------------------|:----------|:------------|:--------------|:---------------------|:---------------|
| Popular aggregated benchmark | 60.6 | 59.4 | 56.5 | 66.2 | 59.9 | 67.0 |
| Reasoning | 69.4 | 60.3 | 62.8 | 68.1 | 69.6 | 71.7 |
| Language understanding | 57.5 | 57.6 | 52.5 | 66.1 | 63.2 | 67.7 |
| Code generation | 61.0 | 45.6 | 42.9 | 52.7 | 56.4 | 70.4 |
| Math | 51.6 | 35.8 | 25.4 | 40.3 | 41.1 | 52.8 |
| Factual knowledge | 35.8 | 46.7 | 49.8 | 58.6 | 43.1 | 63.4 |
| Multilingual | 56.4 | 66.5 | 57.4 | 66.7 | 66.6 | 71.0 |
| Robustness | 61.1 | 38.4 | 40.6 | 51.0 | 64.5 | 69.3 |
Overall, the model, with only 3.8B parameters, achieves a similar level of language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store extensive world knowledge, which can be seen, for example, in its low performance on TriviaQA. We believe such weaknesses can be addressed by augmenting Phi-3-Mini with a search engine.
## Cross Platform Support
[ONNX runtime](https://onnxruntime.ai/blogs/accelerating-phi-3) now supports Phi-3 mini models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux, and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 mini across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3 Mini-128K-Instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"SUMMARIZATION"
] | [
"MEDQA"
] |
tner/roberta-large-bc5cdr | tner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"dataset:tner/bc5cdr",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-09T23:32:35 | 2022-09-26T14:13:58 | 36 | 2 | ---
datasets:
- tner/bc5cdr
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: Jacob Collier is a Grammy awarded artist from England.
example_title: NER Example 1
model-index:
- name: tner/roberta-large-bc5cdr
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: tner/bc5cdr
type: tner/bc5cdr
args: tner/bc5cdr
metrics:
- type: f1
value: 0.8840696387239609
name: F1
- type: precision
value: 0.8728266269249876
name: Precision
- type: recall
value: 0.8956060760526048
name: Recall
- type: f1_macro
value: 0.8797360472482783
name: F1 (macro)
- type: precision_macro
value: 0.8684274142690976
name: Precision (macro)
- type: recall_macro
value: 0.8913672531528037
name: Recall (macro)
- type: f1_entity_span
value: 0.886283586595552
name: F1 (entity span)
- type: precision_entity_span
value: 0.8750124192747144
name: Precision (entity span)
- type: recall_entity_span
value: 0.8978489142624121
name: Recall (entity span)
---
# tner/roberta-large-bc5cdr
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the
[tner/bc5cdr](https://huggingface.co/datasets/tner/bc5cdr) dataset.
Model fine-tuning is done via [T-NER](https://github.com/asahi417/tner)'s hyper-parameter search (see the repository
for more detail). It achieves the following results on the test set:
- F1 (micro): 0.8840696387239609
- Precision (micro): 0.8728266269249876
- Recall (micro): 0.8956060760526048
- F1 (macro): 0.8797360472482783
- Precision (macro): 0.8684274142690976
- Recall (macro): 0.8913672531528037
The per-entity breakdown of the F1 scores on the test set is below:
- chemical: 0.9256943167187788
- disease: 0.8337777777777777
For the F1 scores, confidence intervals are obtained by bootstrap as below:
- F1 (micro):
- 90%: [0.878869501707946, 0.8890795634554179]
- 95%: [0.8776790106527211, 0.8897422640465147]
- F1 (macro):
- 90%: [0.878869501707946, 0.8890795634554179]
- 95%: [0.8776790106527211, 0.8897422640465147]
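For illustration, a percentile bootstrap of this kind can be sketched as follows; this is a simplified token-level version with assumed inputs, whereas the intervals above come from the T-NER evaluation pipeline.

```python
# Simplified sketch of a percentile-bootstrap confidence interval for micro-F1.
# Token-level only; the entity-level F1 reported above is computed by T-NER.
import numpy as np
from sklearn.metrics import f1_score

def bootstrap_f1_ci(y_true, y_pred, n_boot=1000, alpha=0.10, seed=42):
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        scores.append(f1_score(y_true[idx], y_pred[idx], average="micro"))
    return np.quantile(scores, [alpha / 2, 1 - alpha / 2])  # e.g. 90% CI
```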
Full evaluation can be found at [metric file of NER](https://huggingface.co/tner/roberta-large-bc5cdr/raw/main/eval/metric.json)
and [metric file of entity span](https://huggingface.co/tner/roberta-large-bc5cdr/raw/main/eval/metric_span.json).
### Usage
This model can be used through the [tner library](https://github.com/asahi417/tner). Install the library via pip:
```shell
pip install tner
```
and activate the model as below.
```python
from tner import TransformersNER
model = TransformersNER("tner/roberta-large-bc5cdr")
model.predict(["Jacob Collier is a Grammy awarded English artist from London"])
```
The model can also be used through the transformers library, but this is not recommended, as the CRF layer is not supported at the moment.
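For a quick look without the CRF layer, a plain `transformers` pipeline call is sketched below; the example sentence is illustrative, and predictions may differ from the tner library's output.

```python
# Sketch: token classification via the transformers pipeline.
# This bypasses the CRF decoding used by tner, so results may differ.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="tner/roberta-large-bc5cdr",
    aggregation_strategy="simple",
)
print(ner("Naloxone reverses the antihypertensive effect of clonidine."))
```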
### Training hyperparameters
The following hyperparameters were used during training:
- dataset: ['tner/bc5cdr']
- dataset_split: train
- dataset_name: None
- local_dataset: None
- model: roberta-large
- crf: True
- max_length: 128
- epoch: 15
- batch_size: 64
- lr: 1e-05
- random_seed: 42
- gradient_accumulation_steps: 1
- weight_decay: None
- lr_warmup_step_ratio: 0.1
- max_grad_norm: 10.0
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/tner/roberta-large-bc5cdr/raw/main/trainer_config.json).
### Reference
If you use any resource from T-NER, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-camacho-collados-2021-ner,
title = "{T}-{NER}: An All-Round Python Library for Transformer-based Named Entity Recognition",
author = "Ushio, Asahi and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-demos.7",
doi = "10.18653/v1/2021.eacl-demos.7",
pages = "53--62",
abstract = "Language model (LM) pretraining has led to consistent improvements in many NLP downstream tasks, including named entity recognition (NER). In this paper, we present T-NER (Transformer-based Named Entity Recognition), a Python library for NER LM finetuning. In addition to its practical utility, T-NER facilitates the study and investigation of the cross-domain and cross-lingual generalization ability of LMs finetuned on NER. Our library also provides a web app where users can get model predictions interactively for arbitrary text, which facilitates qualitative model evaluation for non-expert programmers. We show the potential of the library by compiling nine public NER datasets into a unified format and evaluating the cross-domain and cross- lingual performance across the datasets. The results from our initial experiments show that in-domain performance is generally competitive across datasets. However, cross-domain generalization is challenging even with a large pretrained LM, which has nevertheless capacity to learn domain-specific features if fine- tuned on a combined dataset. To facilitate future research, we also release all our LM checkpoints via the Hugging Face model hub.",
}
```
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"BC5CDR"
] |
pszemraj/long-t5-tglobal-base-16384-booksci-summary-v1 | pszemraj | summarization | [
"transformers",
"pytorch",
"onnx",
"safetensors",
"longt5",
"text2text-generation",
"generated_from_trainer",
"lay summary",
"narrative",
"biomedical",
"long document summary",
"summarization",
"en",
"dataset:pszemraj/scientific_lay_summarisation-elife-norm",
"base_model:pszemraj/long-t5-tglobal-base-16384-book-summary",
"base_model:quantized:pszemraj/long-t5-tglobal-base-16384-book-summary",
"license:bsd-3-clause",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-08T23:38:17 | 2023-10-05T06:56:00 | 36 | 2 | ---
base_model: pszemraj/long-t5-tglobal-base-16384-book-summary
datasets:
- pszemraj/scientific_lay_summarisation-elife-norm
language:
- en
library_name: transformers
license:
- bsd-3-clause
- apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- generated_from_trainer
- lay summary
- narrative
- biomedical
- long document summary
widget:
- text: large earthquakes along a given fault segment do not occur at random intervals
because it takes time to accumulate the strain energy for the rupture. The rates
at which tectonic plates move and accumulate strain at their boundaries are approximately
uniform. Therefore, in first approximation, one may expect that large ruptures
of the same fault segment will occur at approximately constant time intervals.
If subsequent main shocks have different amounts of slip across the fault, then
the recurrence time may vary, and the basic idea of periodic mainshocks must be
modified. For great plate boundary ruptures the length and slip often vary by
a factor of 2. Along the southern segment of the San Andreas fault the recurrence
interval is 145 years with variations of several decades. The smaller the standard
deviation of the average recurrence interval, the more specific could be the long
term prediction of a future mainshock.
example_title: earthquakes
- text: ' A typical feed-forward neural field algorithm. Spatiotemporal coordinates
are fed into a neural network that predicts values in the reconstructed domain.
Then, this domain is mapped to the sensor domain where sensor measurements are
available as supervision. Class and Section Problems Addressed Generalization
(Section 2) Inverse problems, ill-posed problems, editability; symmetries. Hybrid
Representations (Section 3) Computation & memory efficiency, representation capacity,
editability: Forward Maps (Section 4) Inverse problems Network Architecture (Section
5) Spectral bias, integration & derivatives. Manipulating Neural Fields (Section
6) Edit ability, constraints, regularization. Table 2: The five classes of techniques
in the neural field toolbox each addresses problems that arise in learning, inference,
and control. (Section 3). We can supervise reconstruction via differentiable forward
maps that transform Or project our domain (e.g, 3D reconstruction via 2D images;
Section 4) With appropriate network architecture choices, we can overcome neural
network spectral biases (blurriness) and efficiently compute derivatives and integrals
(Section 5). Finally, we can manipulate neural fields to add constraints and regularizations,
and to achieve editable representations (Section 6). Collectively, these classes
constitute a ''toolbox'' of techniques to help solve problems with neural fields
There are three components in a conditional neural field: (1) An encoder or inference
function € that outputs the conditioning latent variable 2 given an observation
0 E(0) =2. 2 is typically a low-dimensional vector, and is often referred to aS
a latent code Or feature code_ (2) A mapping function 4 between Z and neural field
parameters O: Y(z) = O; (3) The neural field itself $. The encoder € finds the
most probable z given the observations O: argmaxz P(2/0). The decoder maximizes
the inverse conditional probability to find the most probable 0 given Z: arg-
max P(Olz). We discuss different encoding schemes with different optimality guarantees
(Section 2.1.1), both global and local conditioning (Section 2.1.2), and different
mapping functions Y (Section 2.1.3) 2. Generalization Suppose we wish to estimate
a plausible 3D surface shape given a partial or noisy point cloud. We need a suitable
prior over the sur- face in its reconstruction domain to generalize to the partial
observations. A neural network expresses a prior via the function space of its
architecture and parameters 0, and generalization is influenced by the inductive
bias of this function space (Section 5).'
example_title: scientific paper
- text: 'Is a else or outside the cob and tree written being of early client rope
and you have is for good reasons. On to the ocean in Orange for time. By''s the
aggregate we can bed it yet. Why this please pick up on a sort is do and also
M Getoi''s nerocos and do rain become you to let so is his brother is made in
use and Mjulia''s''s the lay major is aging Masastup coin present sea only of
Oosii rooms set to you We do er do we easy this private oliiishs lonthen might
be okay. Good afternoon everybody. Welcome to this lecture of Computational Statistics.
As you can see, I''m not socially my name is Michael Zelinger. I''m one of the
task for this class and you might have already seen me in the first lecture where
I made a quick appearance. I''m also going to give the tortillas in the last third
of this course. So to give you a little bit about me, I''m a old student here
with better Bulman and my research centres on casual inference applied to biomedical
disasters, so that could be genomics or that could be hospital data. If any of
you is interested in writing a bachelor thesis, a semester paper may be mastathesis
about this topic feel for reach out to me. you have my name on models and my email
address you can find in the directory I''d Be very happy to talk about it. you
do not need to be sure about it, we can just have a chat. So with that said, let''s
get on with the lecture. There''s an exciting topic today I''m going to start
by sharing some slides with you and later on during the lecture we''ll move to
the paper. So bear with me for a few seconds. Well, the projector is starting
up. Okay, so let''s get started. Today''s topic is a very important one. It''s
about a technique which really forms one of the fundamentals of data science,
machine learning, and any sort of modern statistics. It''s called cross validation.
I know you really want to understand this topic I Want you to understand this
and frankly, nobody''s gonna leave Professor Mineshousen''s class without understanding
cross validation. So to set the stage for this, I Want to introduce you to the
validation problem in computational statistics. So the problem is the following:
You trained a model on available data. You fitted your model, but you know the
training data you got could always have been different and some data from the
environment. Maybe it''s a random process. You do not really know what it is,
but you know that somebody else who gets a different batch of data from the same
environment they would get slightly different training data and you do not care
that your method performs as well. On this training data. you want to to perform
well on other data that you have not seen other data from the same environment.
So in other words, the validation problem is you want to quantify the performance
of your model on data that you have not seen. So how is this even possible? How
could you possibly measure the performance on data that you do not know The solution
to? This is the following realization is that given that you have a bunch of data,
you were in charge. You get to control how much that your model sees. It works
in the following way: You can hide data firms model. Let''s say you have a training
data set which is a bunch of doubtless so X eyes are the features those are typically
hide and national vector. It''s got more than one dimension for sure. And the
why why eyes. Those are the labels for supervised learning. As you''ve seen before,
it''s the same set up as we have in regression. And so you have this training
data and now you choose that you only use some of those data to fit your model.
You''re not going to use everything, you only use some of it the other part you
hide from your model. And then you can use this hidden data to do validation from
the point of you of your model. This hidden data is complete by unseen. In other
words, we solve our problem of validation.'
example_title: transcribed audio - lecture
- text: 'Transformer-based models have shown to be very useful for many NLP tasks.
However, a major limitation of transformers-based models is its O(n^2)O(n 2) time
& memory complexity (where nn is sequence length). Hence, it''s computationally
very expensive to apply transformer-based models on long sequences n > 512n>512.
Several recent papers, e.g. Longformer, Performer, Reformer, Clustered attention
try to remedy this problem by approximating the full attention matrix. You can
checkout 🤗''s recent blog post in case you are unfamiliar with these models.
BigBird (introduced in paper) is one of such recent models to address this issue.
BigBird relies on block sparse attention instead of normal attention (i.e. BERT''s
attention) and can handle sequences up to a length of 4096 at a much lower computational
cost compared to BERT. It has achieved SOTA on various tasks involving very long
sequences such as long documents summarization, question-answering with long contexts.
BigBird RoBERTa-like model is now available in 🤗Transformers. The goal of this
post is to give the reader an in-depth understanding of big bird implementation
& ease one''s life in using BigBird with 🤗Transformers. But, before going into
more depth, it is important to remember that the BigBird''s attention is an approximation
of BERT''s full attention and therefore does not strive to be better than BERT''s
full attention, but rather to be more efficient. It simply allows to apply transformer-based
models to much longer sequences since BERT''s quadratic memory requirement quickly
becomes unbearable. Simply put, if we would have ∞ compute & ∞ time, BERT''s attention
would be preferred over block sparse attention (which we are going to discuss
in this post).
If you wonder why we need more compute when working with longer sequences, this
blog post is just right for you!
Some of the main questions one might have when working with standard BERT-like
attention include:
Do all tokens really have to attend to all other tokens? Why not compute attention
only over important tokens? How to decide what tokens are important? How to attend
to just a few tokens in a very efficient way? In this blog post, we will try to
answer those questions.
What tokens should be attended to? We will give a practical example of how attention
works by considering the sentence ''BigBird is now available in HuggingFace for
extractive question answering''. In BERT-like attention, every word would simply
attend to all other tokens.
Let''s think about a sensible choice of key tokens that a queried token actually
only should attend to by writing some pseudo-code. Will will assume that the token
available is queried and build a sensible list of key tokens to attend to.
>>> # let''s consider following sentence as an example >>> example = [''BigBird'',
''is'', ''now'', ''available'', ''in'', ''HuggingFace'', ''for'', ''extractive'',
''question'', ''answering'']
>>> # further let''s assume, we''re trying to understand the representation of
''available'' i.e. >>> query_token = ''available'' >>> # We will initialize an
empty `set` and fill up the tokens of our interest as we proceed in this section.
>>> key_tokens = [] # => currently ''available'' token doesn''t have anything
to attend Nearby tokens should be important because, in a sentence (sequence of
words), the current word is highly dependent on neighboring past & future tokens.
This intuition is the idea behind the concept of sliding attention.'
example_title: bigbird blog intro
- text: 'To be fair, you have to have a very high IQ to understand Rick and Morty.
The humour is extremely subtle, and without a solid grasp of theoretical physics
most of the jokes will go over a typical viewer''s head. There''s also Rick''s
nihilistic outlook, which is deftly woven into his characterisation- his personal
philosophy draws heavily from Narodnaya Volya literature, for instance. The fans
understand this stuff; they have the intellectual capacity to truly appreciate
the depths of these jokes, to realise that they''re not just funny- they say something
deep about LIFE. As a consequence people who dislike Rick & Morty truly ARE idiots-
of course they wouldn''t appreciate, for instance, the humour in Rick''s existential
catchphrase ''Wubba Lubba Dub Dub,'' which itself is a cryptic reference to Turgenev''s
Russian epic Fathers and Sons. I''m smirking right now just imagining one of those
addlepated simpletons scratching their heads in confusion as Dan Harmon''s genius
wit unfolds itself on their television screens. What fools.. how I pity them.
😂
And yes, by the way, i DO have a Rick & Morty tattoo. And no, you cannot see it.
It''s for the ladies'' eyes only- and even then they have to demonstrate that
they''re within 5 IQ points of my own (preferably lower) beforehand. Nothin personnel
kid 😎'
example_title: Richard & Mortimer
- text: The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey
building, and the tallest structure in Paris. Its base is square, measuring 125
metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed
the Washington Monument to become the tallest man-made structure in the world,
a title it held for 41 years until the Chrysler Building in New York City was
finished in 1930. It was the first structure to reach a height of 300 metres.
Due to the addition of a broadcasting aerial at the top of the tower in 1957,
it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters,
the Eiffel Tower is the second tallest free-standing structure in France after
the Millau Viaduct.
example_title: eiffel
parameters:
max_length: 64
min_length: 8
no_repeat_ngram_size: 3
early_stopping: true
repetition_penalty: 3.5
encoder_no_repeat_ngram_size: 4
length_penalty: 0.4
num_beams: 4
model-index:
- name: pszemraj/long-t5-tglobal-base-16384-booksci-summary-v1
results:
- task:
type: summarization
name: Summarization
dataset:
name: kmfoda/booksum
type: kmfoda/booksum
config: kmfoda--booksum
split: test
metrics:
- type: rouge
value: 36.7976
name: ROUGE-1
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzk0ZDQ3MDI1MmRhZDZhOTkzMGY3MWZmNGM2MzMwYTI1Y2MyZDQ0ZWZiZTRkZjI2YzJhMTRkOWE2MmM0MzEzNyIsInZlcnNpb24iOjF9.U_h6vUEz3UYWsk90uBckLpUJqSE9L_XlQiwcBdpDLE_lBPTZZ_V0hoFNrR3c2kUKBLZPPrRWsqCqca_uzhTgDw
- type: rouge
value: 6.1002
name: ROUGE-2
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWMwNWVjMDMwYTNlNmQ5Yjc3NzQ0Y2MyNjg2NzA3ZTYyN2NkMmUxMTU3YjUwYzZjNmJlZWQwZTc5ODk0ZjhmOSIsInZlcnNpb24iOjF9.efVyAzcR7ay-Yy3jCzgaF7FnRXdjCLxxEz6crKVjsqwdW7B3eBBdFD5AXRItMk5_yGdrZTSjEFjpgb15Qt3yDw
- type: rouge
value: 16.5037
name: ROUGE-L
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2UwYTU3NzAzNDBmNjRhMWNiOGU2NDE5ZGIwMmY3MzI1NjczZDUxYzVjM2VlNDI5OTgzYjI5NTk3YTgwYjNjZSIsInZlcnNpb24iOjF9.yTuu6tK7MLOf2y_RAG7RAcOrm7uX5OYnYYJ0Nts7ZocojFM45FA4p_DLGwrIKtw8gRWQOj5Y8aUgvRc3ZvPnAA
- type: rouge
value: 33.6126
name: ROUGE-LSUM
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGMyMmUwNTkzZTBjYzg2NTFkY2M5ODkzOTIwOGYzNWEwNWM4ODFiODg5MGM2ZjBmODNkNGJhZTI3ZjY2YzRiOCIsInZlcnNpb24iOjF9.F4clVCMlK2AvTrsBX9LGmbMoI618Iq_gkhyRyNo0s2gJG4y73nZC6s_TH7zolpIDfo-bcn46ALFX7LGmZALrCw
- type: loss
value: 2.3926444053649902
name: loss
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWU3MmJmYmU0ODBhMDJiYWMzN2M3ZTdkM2YzMzI5YzVkM2YwNTA3YjQ5MDBmYmZjOWQ1ZDMwYjUyYTI0ZDQ3YyIsInZlcnNpb24iOjF9.BvumB3q-msXpO1fYkrsy7x9q1ai2mNkRpc18RqfKdUc1pipPnmBOfQYemc9GGZqT8yVAigF2sjWIsZDh4FcICQ
- type: gen_len
value: 279.9161
name: gen_len
verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWM0NDkzZTNiZjhiZjMzMzM5NWUwOGRlYTI4ZjkzNWVjMDNlZTVjMTE2NzdjMTE4ZDJjNDVmZjQxOWZjMDk2MCIsInZlcnNpb24iOjF9.kHWjbQmcBTWxHkhibyIy4S_5Ze759i2nuR8MEB6LIYAQDy0aQgpaOH32Ux0juqENHr390AcxSa04FN8EIQJkCw
---
# long-t5-tglobal-base-16384-booksci-summary: v1
<a href="https://colab.research.google.com/gist/pszemraj/ee06b2b3bfcd7d29e588697a90f7a776/long-t5-tglobal-base-16384-booksci-summary-v1-example-with-textsum.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
An experiment investigating transfer learning capabilities by fine-tuning models on different datasets starting from the `booksum` checkpoint.
## Model Details
This model is a fine-tuned version of [pszemraj/long-t5-tglobal-base-16384-book-summary](https://huggingface.co/pszemraj/long-t5-tglobal-base-16384-book-summary) on the `pszemraj/scientific_lay_summarisation-elife-norm` dataset for two epochs.
## Usage
It's recommended to use this model with [beam search decoding](https://huggingface.co/docs/transformers/generation_strategies#beamsearch-decoding). If interested, you can also use the `textsum` util repo to have most of this abstracted out for you:
```bash
pip install -U textsum
```
```python
from textsum.summarize import Summarizer
model_name = "pszemraj/long-t5-tglobal-base-16384-booksci-summary-v1"
summarizer = Summarizer(model_name) # GPU auto-detected
text = "put the text you don't want to read here"
summary = summarizer.summarize_string(text)
print(summary)
```
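If you prefer plain `transformers`, a pipeline call is sketched below; the generation parameters mirror this card's widget defaults and are illustrative rather than tuned.

```python
# Sketch: summarization via the transformers pipeline.
# Generation parameters mirror this card's widget defaults (illustrative).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-16384-booksci-summary-v1",
)
result = summarizer(
    "put the text you don't want to read here",
    max_length=64,
    min_length=8,
    no_repeat_ngram_size=3,
    encoder_no_repeat_ngram_size=4,
    repetition_penalty=3.5,
    length_penalty=0.4,
    num_beams=4,
    early_stopping=True,
)
print(result[0]["summary_text"])
```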
## Intended uses & limitations
- This is an initial experiment
- Domain generalization abilities at time of writing are unknown
## Training procedure
> Note: this model was trained at a lower LR & not till "absolute convergence" with the intention of retaining some of the properties learned from the initial fine-tuning on `booksum`
### Results
It achieves the following results on the evaluation set:
- Loss: 2.3994
- Rouge1: 34.2428
- Rouge2: 4.3644
- Rougel: 12.5332
- Rougelsum: 30.6965
- Gen Len: 294.0249
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:--------:|
| 2.7492 | 0.99 | 67 | 2.4272 | 34.6436 | 4.4536 | 12.4985 | 30.916 | 300.7635 |
| 2.6689 | 1.97 | 134 | 2.3994 | 34.2428 | 4.3644 | 12.5332 | 30.6965 | 294.0249 |
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"BEAR"
] |
ikim-uk-essen/geberta-large | ikim-uk-essen | fill-mask | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"fill-mask",
"arxiv:2310.07321",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-20T12:43:35 | 2025-01-29T16:26:19 | 36 | 2 | ---
license: mit
---
# GeBERTa
<!-- Provide a quick summary of what the model is/does. -->
GeBERTa is a set of German DeBERTa models developed in a joint effort between the University of Florida, NVIDIA, and IKIM.
The models range in size from 122M to 750M parameters.
## Model details
The models follow the architecture of DeBERTa-v2 and make use of SentencePiece tokenizers. The base and large models use a 50k token vocabulary,
while the xlarge model uses a 128k token vocabulary. All models were trained with a batch size of 2k for a maximum of 1 million steps
and have a maximum sequence length of 512 tokens.
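As a quick sanity check, the models load with the standard fill-mask pipeline; the sketch below uses the large checkpoint and an illustrative example sentence.

```python
# Minimal fill-mask sketch (example sentence is illustrative).
from transformers import pipeline

fill = pipeline("fill-mask", model="ikim-uk-essen/geberta-large")
print(fill("Die Hauptstadt von Deutschland ist [MASK]."))
```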
## Dataset
The pre-training dataset consists of documents from different domains:
| Domain | Dataset | Data Size | #Docs | #Tokens |
| -------- | ----------- | --------- | ------ | ------- |
| Formal | Wikipedia | 9GB | 2,665,357 | 1.9B |
| Formal | News | 28GB | 12,305,326 | 6.1B |
| Formal | GC4 | 90GB | 31,669,772 | 19.4B |
| Informal | Reddit 2019-2023 (GER) | 5.8GB | 15,036,592 | 1.3B |
| Informal | Holiday Reviews | 2GB | 4,876,405 | 428M |
| Legal | OpenLegalData: German cases and laws | 5.4GB | 308,228 | 1B |
| Medical | Smaller public datasets | 253MB | 179,776 | 50M |
| Medical | CC medical texts | 3.6GB | 2,000,000 | 682M |
| Medical | Medicine Dissertations | 1.4GB | 14,496 | 295M |
| Medical | Pubmed abstracts (translated) | 8.5GB | 21,044,382 | 1.7B |
| Medical | MIMIC III (translated) | 2.6GB | 24,221,834 | 695M |
| Medical | PMC-Patients-ReCDS (translated) | 2.1GB | 1,743,344 | 414M |
| Literature | German Fiction | 1.1GB | 3,219 | 243M |
| Literature | English books (translated) | 7.1GB | 11,038 | 1.6B |
| - | Total | 167GB | 116,079,769 | 35.8B |
## Benchmark
In a comprehensive benchmark, we evaluated existing German models and our own. The benchmark included a variety of task types, such as question answering,
classification, and named entity recognition (NER). In addition, we introduced a new task focused on hate speech detection using two existing datasets.
When the datasets provided training, development, and test sets, we used them accordingly.
We randomly split the data into 80% for training, 10% for validation, and 10% for test in cases where such sets were not available.
The following table presents the F1 scores:
| Model | [GE14](https://huggingface.co/datasets/germeval_14) | [GQuAD](https://huggingface.co/datasets/deepset/germanquad) | [GE18](https://huggingface.co/datasets/philschmid/germeval18) | TS | [GGP](https://github.com/JULIELab/GGPOnc) | GRAS<sup>1</sup> | [JS](https://github.com/JULIELab/jsyncc) | [DROC](https://gitlab2.informatik.uni-wuerzburg.de/kallimachos/DROC-Release) | Avg |
|:---------------------:|:--------:|:----------:|:--------:|:--------:|:-------:|:------:|:--------:|:------:|:------:|
| [GBERT](https://huggingface.co/deepset/gbert-large)<sub>large</sub> | 88.48±0.23 | 81.51±0.84 | 54.37±1.65 | 73.60±0.61 | **79.17**±0.14 | 69.28±0.80 | 76.32±4.42 | 90.29±0.15 | 76.63±0.63 |
| [GELECTRA](https://huggingface.co/deepset/gelectra-large)<sub>large</sub> | 88.39±0.13 | 80.51±0.41 | 55.41±1.54 | 73.84±0.86 | 79.09±0.09 | **70.16**±0.92 | 73.73±2.35 | 89.83±0.27 | 76.37±0.69 |
| GeBERTa<sub>large</sub> | 88.84±0.18 | 82.52±0.59 | 53.76±1.86 | 75.32±0.53 | 78.35±0.08 | 70.02±1.34 | 82.16±2.36 | 90.39±0.24 | 77.67±0.69 |
| [GeBERTa](https://huggingface.co/ikim-uk-essen/geberta-xlarge)<sub>xlarge</sub> | **89.04**±0.26 | **85.05**±0.63 | **55.80**±1.42 | **76.25**±0.704 | 76.71±0.08 | 67.92±1.00 | **82.42**±4.70 | **90.63**±0.21 | **77.98**±0.62 |
## Publication
```bibtex
@inproceedings{dada2023impact,
title={On the Impact of Cross-Domain Data on German Language Models},
author={Dada, Amin and Chen, Aokun and Peng, Cheng and Smith, Kaleb E and Idrissi-Yaghir, Ahmad and Seibold, Constantin Marc and Li, Jianning and Heiliger, Lars and Friedrich, Christoph M and Truhn, Daniel and others},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023}
}
```
Paper on arXiv: https://arxiv.org/abs/2310.07321
## Contact
<[email protected]> | [
"NAMED_ENTITY_RECOGNITION",
"QUESTION_ANSWERING"
] | [
"PMC-PATIENTS"
] |
ikim-uk-essen/geberta-xlarge | ikim-uk-essen | fill-mask | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"fill-mask",
"arxiv:2310.07321",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-20T12:48:38 | 2025-01-29T16:26:45 | 36 | 1 | ---
license: mit
---
# GeBERTa
<!-- Provide a quick summary of what the model is/does. -->
GeBERTa is a set of German DeBERTa models developed in a joint effort between the University of Florida, NVIDIA, and IKIM.
The models range in size from 122M to 750M parameters.
## Model details
The models follow the architecture of DeBERTa-v2 and make use of SentencePiece tokenizers. The base and large models use a 50k token vocabulary,
while the xlarge model uses a 128k token vocabulary. All models were trained with a batch size of 2k for a maximum of 1 million steps
and have a maximum sequence length of 512 tokens.
## Dataset
The pre-training dataset consists of documents from different domains:
| Domain | Dataset | Data Size | #Docs | #Tokens |
| -------- | ----------- | --------- | ------ | ------- |
| Formal | Wikipedia | 9GB | 2,665,357 | 1.9B |
| Formal | News | 28GB | 12,305,326 | 6.1B |
| Formal | GC4 | 90GB | 31,669,772 | 19.4B |
| Informal | Reddit 2019-2023 (GER) | 5.8GB | 15,036,592 | 1.3B |
| Informal | Holiday Reviews | 2GB | 4,876,405 | 428M |
| Legal | OpenLegalData: German cases and laws | 5.4GB | 308,228 | 1B |
| Medical | Smaller public datasets | 253MB | 179,776 | 50M |
| Medical | CC medical texts | 3.6GB | 2,000,000 | 682M |
| Medical | Medicine Dissertations | 1.4GB | 14,496 | 295M |
| Medical | Pubmed abstracts (translated) | 8.5GB | 21,044,382 | 1.7B |
| Medical | MIMIC III (translated) | 2.6GB | 24,221,834 | 695M |
| Medical | PMC-Patients-ReCDS (translated) | 2.1GB | 1,743,344 | 414M |
| Literature | German Fiction | 1.1GB | 3,219 | 243M |
| Literature | English books (translated) | 7.1GB | 11,038 | 1.6B |
| - | Total | 167GB | 116,079,769 | 35.8B |
## Benchmark
In a comprehensive benchmark, we evaluated existing German models and our own. The benchmark included a variety of task types, such as question answering,
classification, and named entity recognition (NER). In addition, we introduced a new task focused on hate speech detection using two existing datasets.
When the datasets provided training, development, and test sets, we used them accordingly.
We randomly split the data into 80% for training, 10% for validation, and 10% for test in cases where such sets were not available.
The following table presents the F1 scores:
| Model | [GE14](https://huggingface.co/datasets/germeval_14) | [GQuAD](https://huggingface.co/datasets/deepset/germanquad) | [GE18](https://huggingface.co/datasets/philschmid/germeval18) | TS | [GGP](https://github.com/JULIELab/GGPOnc) | GRAS<sup>1</sup> | [JS](https://github.com/JULIELab/jsyncc) | [DROC](https://gitlab2.informatik.uni-wuerzburg.de/kallimachos/DROC-Release) | Avg |
|:---------------------:|:--------:|:----------:|:--------:|:--------:|:-------:|:------:|:--------:|:------:|:------:|
| [GBERT](https://huggingface.co/deepset/gbert-large)<sub>large</sub> | 88.48±0.23 | 81.51±0.84 | 54.37±1.65 | 73.60±0.61 | **79.17**±0.14 | 69.28±0.80 | 76.32±4.42 | 90.29±0.15 | 76.63±0.63 |
| [GELECTRA](https://huggingface.co/deepset/gelectra-large)<sub>large</sub> | 88.39±0.13 | 80.51±0.41 | 55.41±1.54 | 73.84±0.86 | 79.09±0.09 | **70.16**±0.92 | 73.73±2.35 | 89.83±0.27 | 76.37±0.69 |
| [GeBERTa](https://huggingface.co/ikim-uk-essen/geberta-large)<sub>large</sub> | 88.84±0.18 | 82.52±0.59 | 53.76±1.86 | 75.32±0.53 | 78.35±0.08 | 70.02±1.34 | 82.16±2.36 | 90.39±0.24 | 77.67±0.69 |
| GeBERTa<sub>xlarge</sub> | **89.04**±0.26 | **85.05**±0.63 | **55.80**±1.42 | **76.25**±0.704 | 76.71±0.08 | 67.92±1.00 | **82.42**±4.70 | **90.63**±0.21 | **77.98**±0.62 |
## Publication
```bibtex
@inproceedings{dada2023impact,
title={On the Impact of Cross-Domain Data on German Language Models},
author={Dada, Amin and Chen, Aokun and Peng, Cheng and Smith, Kaleb E and Idrissi-Yaghir, Ahmad and Seibold, Constantin Marc and Li, Jianning and Heiliger, Lars and Friedrich, Christoph M and Truhn, Daniel and others},
booktitle={The 2023 Conference on Empirical Methods in Natural Language Processing},
year={2023}
}
```
Paper on arXiv: https://arxiv.org/abs/2310.07321
## Contact
<[email protected]> | [
"NAMED_ENTITY_RECOGNITION",
"QUESTION_ANSWERING"
] | [
"PMC-PATIENTS"
] |
TRI-ML/DCLM-1B | TRI-ML | null | [
"transformers",
"safetensors",
"openlm",
"arxiv:2406.11794",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:41:43 | 2024-07-25T23:12:22 | 36 | 13 | ---
license: apache-2.0
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/63118add64939fabc0108b28/BB42g4V8HTxb5dR4tcy8A.png" alt="DCLM Logo" width="300" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
# Model Card for DCLM-1B
DCLM-1B is a 1.4 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.
The instruction tuned version of this model is available here: https://huggingface.co/TRI-ML/DCLM-1B-IT
## Quickstart
First install open_lm
```
pip install git+https://github.com/mlfoundations/open_lm.git
```
Then you can load the model using HF's Auto classes as follows:
```python
from open_lm.hf import *
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TRI-ML/DCLM-1B")
model = AutoModelForCausalLM.from_pretrained("TRI-ML/DCLM-1B")
inputs = tokenizer(["Machine learning is"], return_tensors="pt")
gen_kwargs = {"max_new_tokens": 50, "top_p": 0.8, "temperature": 0.8, "do_sample": True, "repetition_penalty": 1.1}
output = model.generate(inputs['input_ids'], **gen_kwargs)
output = tokenizer.decode(output[0].tolist(), skip_special_tokens=True)
print(output)
```
## Evaluation
We evaluate DCLM-1B using the [llm-foundry](https://github.com/mosaicml/llm-foundry) eval suite, and compare to recently released small models on key benchmarks.
As described in the paper, Core accuracy is the average of centered accuracy on
22 tasks (including HellaSwag and ARC-E), and Extended is centered accuracy averaged
over 53 tasks.
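For reference, "centered" accuracy rescales raw accuracy so that random-chance performance maps to 0 and perfect performance to 1; a sketch of the usual definition is below (see the paper for the exact formulation).

```python
def centered_accuracy(acc: float, chance: float) -> float:
    # Rescale accuracy so chance level maps to 0 and perfect maps to 1.
    # E.g. a 4-way multiple-choice task has chance = 0.25.
    return (acc - chance) / (1.0 - chance)
```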
| Model | Params | Tokens | Open dataset? | Core | MMLU 5-shot | Extended |
|-----------------------------------|:--------:|:--------:|:---------------:|:----------:|:----------:|:-----------:|
| **Open weights, closed datasets** | | | | | | |
| Qwen2-1.5B | 1.5B | 7T | ❌ | 42.1 | **56.4** | **32.4** |
| Gemma-2B | 2.5B | 3T | ❌ | **43.3** | 40.8 | 26.6 |
| **Open weights, open datasets** | | | | | | |
| OLMo-1B | 1.2B | 3T | ✅ | 29.7 | 26.0 | 16.1 |
| SmolLM | 1.7B | 1T | ✅ | 36.3 | 30.0 | 21.2 |
| DCLM-1B | 1.4B | 4.3T | ✅ | **45.2** | **47.5** | **28.1** |
## Model Details
| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|:------:|:-----------------:|:--------:|:-------------:|:-----------------:|:----------------:|
| 1.4B | 4.3T | 24 | 2048 | 16 | 2048 |
### Model Description
- **Developed by:** DataComp for Language Models (DCLM) Team
- **Model type:** Decoder-only Transformer language model
- **Language(s):** English (primarily)
- **License:** Apache 2.0
- **Contact:** [email protected]
- **Date:** July 2024
### Model Sources
- **Repository:** https://github.com/mlfoundations/dclm
- **Dataset:** https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0
- **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)
### Training Details
The model was trained using the following setup:
- **Architecture:** Decoder-only Transformer
- **Framework:** PyTorch with OpenLM
- **Optimizer:** AdamW
- **Learning Rate:** 1e-2 (peak)
- **Weight Decay:** 1e-2
- **Batch Size:** 2048 sequences
- **Sequence Length:** 2048 tokens
- **Total Training Tokens:** 4.3T
- **Hardware:** Trained on H100 GPUs
We train our 1.4B model for 4.3T tokens on DCLM-Baseline, combined with the
StarCoder and ProofPile2 datasets.
We will update our paper soon with more training details.
### Detailed evaluation
| Task | Score |
|------------------------------------------|---------|
| AGI Eval LSAT AR | 0.2652 |
| AGI Eval LSAT LR | 0.3314 |
| AGI Eval LSAT RC | 0.4179 |
| AGI Eval SAT English | 0.4709 |
| AGI Eval SAT Math (CoT) | 0.0318 |
| AQuA (CoT) | 0.0245 |
| ARC (challenge) | 0.4744 |
| ARC (easy) | 0.7462 |
| BBQ | 0.5151 |
| BigBench Conceptual Combinations | 0.5437 |
| BigBench Conlang Translation | 0.0793 |
| BigBench CS Algorithms | 0.4720 |
| BigBench Dyck Languages | 0.2210 |
| BigBench Elementary Math QA | 0.2598 |
| BigBench Language Identification | 0.3284 |
| BigBench Logical Deduction | 0.2473 |
| BigBench Misconceptions | 0.5662 |
| BigBench Novel Concepts | 0.5000 |
| BigBench Operators | 0.3476 |
| BigBench QA Wikidata | 0.6852 |
| BigBench Repeat Copy Logic | 0.1250 |
| BigBench Strange Stories | 0.6724 |
| BigBench Strategy QA | 0.5671 |
| BigBench Understanding Fables | 0.4603 |
| BoolQ | 0.7382 |
| CommonSenseQA | 0.6708 |
| COPA | 0.8200 |
| CoQA | 0.4314 |
| Enterprise PII Classification | 0.5246 |
| GPQA Diamond | 0.2424 |
| GPQA | 0.2500 |
| GSM8K (CoT) | 0.0629 |
| HellaSwag | 0.7285 |
| HellaSwag (zero-shot) | 0.7162 |
| Jeopardy | 0.4514 |
| LAMBADA (OpenAI) | 0.6992 |
| LogiQA | 0.3103 |
| MathQA | 0.2682 |
| MMLU (few-shot) | 0.4752 |
| MMLU (zero-shot) | 0.4175 |
| OpenBookQA | 0.4280 |
| PIQA | 0.7829 |
| PubMedQA (labeled) | 0.3790 |
| Simple Arithmetic (no spaces) | 0.0650 |
| Simple Arithmetic (with spaces) | 0.0700 |
| SIQA | 0.6868 |
| SQuAD | 0.5494 |
| SVAMP (CoT) | 0.2733 |
| TriviaQA (small subset) | 0.4133 |
| Winogender (MC female) | 0.4667 |
| Winogender (MC male) | 0.4000 |
| Winograd | 0.8608 |
| Winogrande | 0.6630 |
## Limitations and Biases
While DCLM-1B demonstrates strong performance across a range of tasks, it's important to note:
1. The model may exhibit biases present in its training data, which is derived from web crawl data.
2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
3. Performance on tasks not included in the evaluation suite may vary.
4. The model's knowledge is limited to its training data cutoff date.
## Ethical Considerations
Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.
## Citation
If you use this model in your research, please cite:
```
@article{Li2024DataCompLM,
title={DataComp-LM: In search of the next generation of training sets for language models},
author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
journal={arXiv preprint arXiv:2406.11794},
year={2024}
}
```
| [
"TRANSLATION"
] | [
"PUBMEDQA"
] |
mav23/Phi-3-mini-128k-instruct-GGUF | mav23 | text-generation | [
"gguf",
"nlp",
"code",
"text-generation",
"en",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-10-22T09:16:18 | 2024-10-22T09:49:29 | 36 | 0 | ---
language:
- en
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
🎉 **Phi-3.5**: [[mini-instruct]](https://huggingface.co/microsoft/Phi-3.5-mini-instruct); [[MoE-instruct]](https://huggingface.co/microsoft/Phi-3.5-MoE-instruct) ; [[vision-instruct]](https://huggingface.co/microsoft/Phi-3.5-vision-instruct)
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family; the Mini version comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which refer to the context length (in tokens) that the model can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) <br>
📖 [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) <br>
🛠️ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3)
| | Short Context | Long Context |
| :- | :- | :- |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using the model within a specific downstream use case, particularly in high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## Release Notes
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback.
The model used additional post-training data, leading to substantial gains in long-context understanding, instruction following, and structured output.
We also improved multi-turn conversation quality, added explicit support for the <|system|> tag, and significantly improved reasoning capability.
We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
The tables below highlight improvements in instruction following, structured output, reasoning, and long-context understanding in the new release on our public and internal benchmark datasets.
| Benchmarks | Original | June 2024 Update |
| :- | :- | :- |
| Instruction Extra Hard | 5.7 | 5.9 |
| Instruction Hard | 5.0 | 5.2 |
| JSON Structure Output | 1.9 | 60.1 |
| XML Structure Output | 47.8 | 52.9 |
| GPQA | 25.9 | 29.7 |
| MMLU | 68.1 | 69.7 |
| **Average** | **25.7** | **37.3** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
| :-------------------| :------| :------| :------| :------| :------| :------| :---------|
| Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | **68.8** |
| June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | **84.6** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
| :-------------------| :--------| :-----| :------| :------| :------------| :---------|
| Original | 27 | 29 | 40 | 33 | 33 | **32.4** |
| June 2024 Update | 85 | 63 | 72 | 93 | 72 | **77** |
Note: if you would like to check out the previous version, use the git commit id **bb5bf1e4001277a606e11debca0ef80323e5f824**. For model conversion, e.g. to GGUF and other formats, we invite the community to experiment with various approaches and share valuable feedback. Let's innovate together!
## How to Use
Phi-3 Mini-128K-Instruct has been integrated into the development version (4.41.3) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. This command is an alternative to cloning the repository and installing from source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```
Phi-3 Mini-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3)
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, and the vocabulary can also be extended up to the model's full size.
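As a minimal sketch of extending the tokenizer for fine-tuning (the `<|my_tool|>` token name below is purely illustrative, not part of the official tokenizer):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True
)

# Register a hypothetical special token for a downstream task.
num_added = tokenizer.add_special_tokens(
    {"additional_special_tokens": ["<|my_tool|>"]}
)

# Grow the embedding matrix only if the vocabulary now exceeds it
# (the model already reserves space for up to 32064 entries).
if num_added > 0 and len(tokenizer) > model.get_input_embeddings().weight.shape[0]:
    model.resize_token_embeddings(len(tokenizer))
```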
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:

1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.
2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.
3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.

These are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippet shows how to quickly get started running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-128k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Note: if you want to use flash attention, call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="flash_attention_2"`.
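Concretely, that corresponds to something like the following (a minimal sketch; it requires the `flash-attn` package and one of the supported GPUs listed under Hardware below):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",  # use "eager" on older GPUs
)
```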
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: The Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make them unsuitable to deploy in sensitive contexts without additional use-case-specific mitigations.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, and `itertools`. If the model generates Python scripts that use other packages, or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 10 days
* Training data: 4.9T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between May and June 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release date: June 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.9 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness.
We focus on the quality of data that could potentially improve the reasoning ability of the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning in small models. More details about the data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
### Fine-tuning
A basic example of multi-GPU supervised fine-tuning (SFT) with the TRL and Accelerate libraries is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
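For orientation, here is a heavily simplified, single-process sketch of what such a run can look like. It assumes a dataset with a pre-formatted `text` column (`my-org/my-chat-dataset` is a placeholder), and `SFTTrainer` argument names vary across TRL versions, so treat the linked script as the authoritative recipe:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# Placeholder dataset id; substitute your own chat-formatted data.
dataset = load_dataset("my-org/my-chat-dataset", split="train")

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # column holding pre-formatted chat text
    max_seq_length=2048,
    args=TrainingArguments(output_dir="phi3-sft", per_device_train_batch_size=1),
)
trainer.train()
```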
## Benchmarks
We report the results under completion format for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool for evaluating language models; in particular, we did not optimize the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
| Category | Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| :----------| :-----------| :---------------------| :----------| :------------| :--------------| :----------------| :-------------------|
| Popular aggregated benchmark | AGI Eval <br>5-shot| 39.5 | 42.1 | 35.1 | 45.2 | 42 | 48.4 |
| | MMLU <br>5-shot | 69.7 | 63.6 | 61.7 | 70.5 | 66.5 | 71.4 |
| | BigBench Hard <br>3-shot | 72.1 | 59.6 | 57.3 | 69.7 | 51.5 | 68.3 |
| Language Understanding | ANLI <br>7-shot | 52.3 | 48.7 | 47.1 | 55.2 | 57.3 | 58.1 |
| | HellaSwag <br>5-shot | 70.5 | 49.8 | 58.5 | 70.4 | 71.1 | 78.8 |
| Reasoning | ARC Challenge <br>10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| | BoolQ <br>0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| | MedQA <br>2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| | OpenBookQA <br>10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| | PIQA <br>5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| | GPQA <br>0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| | Social IQA <br>5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| | TruthfulQA (MC2) <br>10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| | WinoGrande <br>5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Factual Knowledge | TriviaQA <br>5-shot | 57.8 | 72.3 | 75.2 | 82.2 | 67.7 | 85.8 |
| Math | GSM8K CoT <br>8-shot | 85.3 | 59.8 | 46.4 | 64.7 | 77.4 | 78.1 |
| Code Generation | HumanEval <br>0-shot | 60.4 | 34.1 | 28.0 | 37.8 | 60.4 | 62.2 |
| | MBPP <br>3-shot | 70.0 | 51.5 | 50.8 | 60.2 | 67.7 | 77.8 |
| **Average** | | **66.4** | **56.0** | **56.4** | **64.4** | **65.5** | **70.3** |
**Long Context**: Phi-3 Mini-128K-Instruct supports a 128K context length and is therefore capable of several long-context tasks, including long document/meeting summarization and long document QA.
| Benchmark | Phi-3 Mini-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct |
| :---------------| :--------------------------|:------------|:--------------|:---------------------|
| GovReport | 25.3 | 4.9 | 20.3 | 10.3 |
| QMSum | 21.9 | 15.5 | 20.6 | 2.9 |
| Qasper | 41.6 | 23.5 | 26.6 | 8.1 |
| SQuALITY | 24.1 | 14.7 | 16.2 | 25 |
| SummScreenFD | 16.8 | 9.3 | 11.3 | 5.1 |
| **Average** | **25.9** | **13.6** | **19.0** | **10.3** |
We take a closer look at different categories across 100 public benchmark datasets in the table below:
| Category | Phi-3-Mini-128K-Instruct | Gemma-7B | Mistral-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo |
|:----------|:--------------------------|:----------|:------------|:--------------|:---------------------|:---------------|
| Popular aggregated benchmark | 60.6 | 59.4 | 56.5 | 66.2 | 59.9 | 67.0 |
| Reasoning | 69.4 | 60.3 | 62.8 | 68.1 | 69.6 | 71.7 |
| Language understanding | 57.5 | 57.6 | 52.5 | 66.1 | 63.2 | 67.7 |
| Code generation | 61.0 | 45.6 | 42.9 | 52.7 | 56.4 | 70.4 |
| Math | 51.6 | 35.8 | 25.4 | 40.3 | 41.1 | 52.8 |
| Factual knowledge | 35.8 | 46.7 | 49.8 | 58.6 | 43.1 | 63.4 |
| Multilingual | 56.4 | 66.5 | 57.4 | 66.7 | 66.6 | 71.0 |
| Robustness | 61.1 | 38.4 | 40.6 | 51.0 | 64.5 | 69.3 |
Overall, with only 3.8B parameters, the model achieves a similar level of language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store extensive world knowledge, which can be seen, for example, in its low performance on TriviaQA. We believe this weakness can be mitigated by augmenting Phi-3-Mini with a search engine.
## Cross Platform Support
[ONNX Runtime](https://onnxruntime.ai/blogs/accelerating-phi-3) now supports Phi-3 Mini models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux, and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 Mini across a range of devices (CPU, GPU, and mobile).
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
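As a sketch of how these ONNX variants can be driven from Python with the `onnxruntime-genai` package (the API has evolved across releases, so the method names below reflect the 2024-era examples and should be checked against your installed version):

```python
import onnxruntime_genai as og

# Placeholder path to a downloaded ONNX model directory.
model = og.Model("Phi-3-mini-128k-instruct-onnx/cuda-int4")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(
    "<|user|>\nWhat is ONNX Runtime?<|end|>\n<|assistant|>\n"
)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
    # Decode and print tokens as they are produced.
    print(stream.decode(generator.get_next_tokens()[0]), end="", flush=True)
```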
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3 Mini-128K-Instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"`
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
| [
"SUMMARIZATION"
] | [
"MEDQA"
] |
pankajrajdeo/UMLS-ED-Bioformer-16L-V-1.25 | pankajrajdeo | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:187491593",
"loss:CustomTripletLoss",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:pankajrajdeo/UMLS-ED-Bioformer-16L-V-1",
"base_model:finetune:pankajrajdeo/UMLS-ED-Bioformer-16L-V-1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-11-08T15:22:20 | 2025-01-05T20:23:05 | 36 | 0 | ---
base_model:
- pankajrajdeo/UMLS-ED-Bioformer-16L-V-1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:187491593
- loss:CustomTripletLoss
widget:
- source_sentence: Hylocharis xantusii
sentences:
- Xantus's hummingbird
- C5721346
- C1623532
- Iole viridescens viridescens
- source_sentence: HTLV1+2 RNA XXX Ql PCR
sentences:
- HTLV 1+2 RNA:MevcEşik:Zmlı:XXX:Srl:Prob.amf.hdf
- Nota de progreso:Tipo:Punto temporal:{Configuración}:Documento:Pain medicine
- C0368469
- C4070921
- source_sentence: Degeneração Nigroestriatal
sentences:
- C0270733
- hiperinsulinismo debido a deficiencia de 3-hidroxiacil-coenzima A deshidrogenasa
de cadena corta
- Striatonigral atrophy
- C4303473
- source_sentence: Clostridioides difficile As:titer:moment:serum:semikwantitatief
sentences:
- Dehidroepiandrosteron:MevcEşik:Zmlı:İdrar:Srl
- C0485219
- C0364328
- Clostridium difficile Ac:Título:Pt:Soro:Qn
- source_sentence: E Vicotrat
sentences:
- C2742706
- C2350910
- germanium L-cysteine alpha-tocopherol complex
- Eosine I Bluish, Dipotassium Salt
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
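The `Pooling` module above builds the sentence embedding by masked mean pooling over the token embeddings (`pooling_mode_mean_tokens: True`). A minimal sketch of the equivalent computation:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 384); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()    # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)  # sum over non-padding tokens
    counts = mask.sum(dim=1).clamp(min=1e-9)       # avoid division by zero
    return summed / counts                         # (batch, 384)
```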
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("pankajrajdeo/937457_bioformer_16L")
# Run inference
sentences = [
'E Vicotrat',
'Eosine I Bluish, Dipotassium Salt',
'C2742706',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 187,491,593 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_id</code>, <code>positive_id</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_id | positive_id | negative |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 13.27 tokens</li><li>max: 247 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 12.25 tokens</li><li>max: 157 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 6.27 tokens</li><li>max: 7 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 6.49 tokens</li><li>max: 7 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 13.53 tokens</li><li>max: 118 tokens</li></ul> |
* Samples:
| anchor | positive | negative_id | positive_id | negative |
|:----------------------------------------------|:------------------------------------------------------------------------------------------------|:----------------------|:----------------------|:------------------------------------------------------------------------------------------------|
| <code>Zaburzenie metabolizmu minerałów</code> | <code>Distúrbio não especificado do metabolismo de minerais</code> | <code>C2887914</code> | <code>C0154260</code> | <code>Acute alcoholic hepatic failure</code> |
| <code>testy funkčnosti placenty</code> | <code>Metoder som brukes til å vurdere morkakefunksjon.</code> | <code>C2350391</code> | <code>C0032049</code> | <code>Hjärtmuskelscintigrafi</code> |
| <code>Tsefapiriin:Susc:Pt:Is:OrdQn</code> | <code>cefapirina:susceptibilidad:punto en el tiempo:cepa clínica:ordinal o cuantitativo:</code> | <code>C0942365</code> | <code>C0801894</code> | <code>2 proyecciones:hallazgo:punto en el tiempo:tobillo.izquierdo:Narrativo:radiografía</code> |
* Loss: <code>__main__.CustomTripletLoss</code> with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 5
}
```
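The exact `CustomTripletLoss` implementation is not reproduced in this card; as a sketch, the standard Euclidean triplet objective that these parameters configure looks like:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor: torch.Tensor,
                 positive: torch.Tensor,
                 negative: torch.Tensor,
                 margin: float = 5.0) -> torch.Tensor:
    # Push d(anchor, positive) below d(anchor, negative) by at least `margin`.
    d_pos = F.pairwise_distance(anchor, positive, p=2)
    d_neg = F.pairwise_distance(anchor, negative, p=2)
    return F.relu(d_pos - d_neg + margin).mean()
```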
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 50
- `learning_rate`: 2e-05
- `num_train_epochs`: 5
- `warmup_ratio`: 0.1
- `fp16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 50
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss |
|:------:|:------:|:-------------:|
| 0.0003 | 1000 | 1.0069 |
| 0.0005 | 2000 | 0.9728 |
| 0.0008 | 3000 | 0.9549 |
| 0.0011 | 4000 | 0.9217 |
| 0.0013 | 5000 | 0.9116 |
| 0.0016 | 6000 | 0.8662 |
| 0.0019 | 7000 | 0.8412 |
| 0.0021 | 8000 | 0.7979 |
| 0.0024 | 9000 | 0.7829 |
| 0.0027 | 10000 | 0.7578 |
| 0.0029 | 11000 | 0.7402 |
| 0.0032 | 12000 | 0.7069 |
| 0.0035 | 13000 | 0.6906 |
| 0.0037 | 14000 | 0.6644 |
| 0.0040 | 15000 | 0.6516 |
| 0.0043 | 16000 | 0.6344 |
| 0.0045 | 17000 | 0.6395 |
| 0.0048 | 18000 | 0.6082 |
| 0.0051 | 19000 | 0.5944 |
| 0.0053 | 20000 | 0.5955 |
| 0.0056 | 21000 | 0.576 |
| 0.0059 | 22000 | 0.5723 |
| 0.0061 | 23000 | 0.5475 |
| 0.0064 | 24000 | 0.5452 |
| 0.0067 | 25000 | 0.5485 |
| 0.0069 | 26000 | 0.5143 |
| 0.0072 | 27000 | 0.5062 |
| 0.0075 | 28000 | 0.5118 |
| 0.0077 | 29000 | 0.4992 |
| 0.0080 | 30000 | 0.5031 |
| 0.0083 | 31000 | 0.4762 |
| 0.0085 | 32000 | 0.4773 |
| 0.0088 | 33000 | 0.4742 |
| 0.0091 | 34000 | 0.4692 |
| 0.0093 | 35000 | 0.464 |
| 0.0096 | 36000 | 0.4687 |
| 0.0099 | 37000 | 0.4592 |
| 0.0101 | 38000 | 0.4468 |
| 0.0104 | 39000 | 0.4425 |
| 0.0107 | 40000 | 0.4477 |
| 0.0109 | 41000 | 0.4336 |
| 0.0112 | 42000 | 0.4331 |
| 0.0115 | 43000 | 0.4248 |
| 0.0117 | 44000 | 0.4189 |
| 0.0120 | 45000 | 0.4147 |
| 0.0123 | 46000 | 0.4112 |
| 0.0125 | 47000 | 0.4051 |
| 0.0128 | 48000 | 0.399 |
| 0.0131 | 49000 | 0.3921 |
| 0.0133 | 50000 | 0.3917 |
| 0.0136 | 51000 | 0.4058 |
| 0.0139 | 52000 | 0.3843 |
| 0.0141 | 53000 | 0.3811 |
| 0.0144 | 54000 | 0.3733 |
| 0.0147 | 55000 | 0.3787 |
| 0.0149 | 56000 | 0.3859 |
| 0.0152 | 57000 | 0.3742 |
| 0.0155 | 58000 | 0.3682 |
| 0.0157 | 59000 | 0.3705 |
| 0.0160 | 60000 | 0.3483 |
| 0.0163 | 61000 | 0.3469 |
| 0.0165 | 62000 | 0.3586 |
| 0.0168 | 63000 | 0.3346 |
| 0.0171 | 64000 | 0.3474 |
| 0.0173 | 65000 | 0.3625 |
| 0.0176 | 66000 | 0.3501 |
| 0.0179 | 67000 | 0.3456 |
| 0.0181 | 68000 | 0.3383 |
| 0.0184 | 69000 | 0.3457 |
| 0.0187 | 70000 | 0.3437 |
| 0.0189 | 71000 | 0.3395 |
| 0.0192 | 72000 | 0.3399 |
| 0.0195 | 73000 | 0.324 |
| 0.0197 | 74000 | 0.338 |
| 0.0200 | 75000 | 0.3268 |
| 0.0203 | 76000 | 0.3298 |
| 0.0205 | 77000 | 0.3282 |
| 0.0208 | 78000 | 0.3356 |
| 0.0211 | 79000 | 0.3187 |
| 0.0213 | 80000 | 0.3155 |
| 0.0216 | 81000 | 0.3181 |
| 0.0219 | 82000 | 0.3085 |
| 0.0221 | 83000 | 0.3168 |
| 0.0224 | 84000 | 0.3162 |
| 0.0227 | 85000 | 0.3126 |
| 0.0229 | 86000 | 0.3026 |
| 0.0232 | 87000 | 0.3017 |
| 0.0235 | 88000 | 0.2963 |
| 0.0237 | 89000 | 0.3002 |
| 0.0240 | 90000 | 0.297 |
| 0.0243 | 91000 | 0.2993 |
| 0.0245 | 92000 | 0.306 |
| 0.0248 | 93000 | 0.2964 |
| 0.0251 | 94000 | 0.2992 |
| 0.0253 | 95000 | 0.2921 |
| 0.0256 | 96000 | 0.3103 |
| 0.0259 | 97000 | 0.2897 |
| 0.0261 | 98000 | 0.2843 |
| 0.0264 | 99000 | 0.2914 |
| 0.0267 | 100000 | 0.2952 |
| 0.0269 | 101000 | 0.2922 |
| 0.0272 | 102000 | 0.2807 |
| 0.0275 | 103000 | 0.2797 |
| 0.0277 | 104000 | 0.2849 |
| 0.0280 | 105000 | 0.2959 |
| 0.0283 | 106000 | 0.2823 |
| 0.0285 | 107000 | 0.2637 |
| 0.0288 | 108000 | 0.2804 |
| 0.0291 | 109000 | 0.2761 |
| 0.0293 | 110000 | 0.2821 |
| 0.0296 | 111000 | 0.2876 |
| 0.0299 | 112000 | 0.2699 |
| 0.0301 | 113000 | 0.2758 |
| 0.0304 | 114000 | 0.2802 |
| 0.0307 | 115000 | 0.2689 |
| 0.0309 | 116000 | 0.2871 |
| 0.0312 | 117000 | 0.2603 |
| 0.0315 | 118000 | 0.2728 |
| 0.0317 | 119000 | 0.2769 |
| 0.0320 | 120000 | 0.2527 |
| 0.0323 | 121000 | 0.2677 |
| 0.0325 | 122000 | 0.2748 |
| 0.0328 | 123000 | 0.2648 |
| 0.0331 | 124000 | 0.2645 |
| 0.0333 | 125000 | 0.2637 |
| 0.0336 | 126000 | 0.2613 |
| 0.0339 | 127000 | 0.261 |
| 0.0341 | 128000 | 0.2568 |
| 0.0344 | 129000 | 0.2611 |
| 0.0347 | 130000 | 0.2486 |
| 0.0349 | 131000 | 0.2535 |
| 0.0352 | 132000 | 0.2525 |
| 0.0355 | 133000 | 0.2457 |
| 0.0357 | 134000 | 0.2545 |
| 0.0360 | 135000 | 0.2596 |
| 0.0363 | 136000 | 0.2505 |
| 0.0365 | 137000 | 0.2454 |
| 0.0368 | 138000 | 0.2696 |
| 0.0371 | 139000 | 0.2567 |
| 0.0373 | 140000 | 0.2517 |
| 0.0376 | 141000 | 0.2436 |
| 0.0379 | 142000 | 0.2452 |
| 0.0381 | 143000 | 0.2427 |
| 0.0384 | 144000 | 0.2525 |
| 0.0387 | 145000 | 0.243 |
| 0.0389 | 146000 | 0.2417 |
| 0.0392 | 147000 | 0.2599 |
| 0.0395 | 148000 | 0.246 |
| 0.0397 | 149000 | 0.2379 |
| 0.0400 | 150000 | 0.2449 |
| 0.0403 | 151000 | 0.2333 |
| 0.0405 | 152000 | 0.2399 |
| 0.0408 | 153000 | 0.2409 |
| 0.0411 | 154000 | 0.2407 |
| 0.0413 | 155000 | 0.2369 |
| 0.0416 | 156000 | 0.2361 |
| 0.0419 | 157000 | 0.2331 |
| 0.0421 | 158000 | 0.232 |
| 0.0424 | 159000 | 0.2337 |
| 0.0427 | 160000 | 0.2331 |
| 0.0429 | 161000 | 0.2328 |
| 0.0432 | 162000 | 0.2278 |
| 0.0435 | 163000 | 0.2335 |
| 0.0437 | 164000 | 0.2301 |
| 0.0440 | 165000 | 0.2381 |
| 0.0443 | 166000 | 0.2298 |
| 0.0445 | 167000 | 0.2355 |
| 0.0448 | 168000 | 0.2254 |
| 0.0451 | 169000 | 0.2301 |
| 0.0453 | 170000 | 0.2319 |
| 0.0456 | 171000 | 0.2314 |
| 0.0459 | 172000 | 0.236 |
| 0.0461 | 173000 | 0.2348 |
| 0.0464 | 174000 | 0.231 |
| 0.0467 | 175000 | 0.2291 |
| 0.0469 | 176000 | 0.2246 |
| 0.0472 | 177000 | 0.2259 |
| 0.0475 | 178000 | 0.2254 |
| 0.0477 | 179000 | 0.2223 |
| 0.0480 | 180000 | 0.2285 |
| 0.0483 | 181000 | 0.2306 |
| 0.0485 | 182000 | 0.2233 |
| 0.0488 | 183000 | 0.2117 |
| 0.0491 | 184000 | 0.2219 |
| 0.0493 | 185000 | 0.2226 |
| 0.0496 | 186000 | 0.2161 |
| 0.0499 | 187000 | 0.2195 |
| 0.0501 | 188000 | 0.2208 |
| 0.0504 | 189000 | 0.2198 |
| 0.0507 | 190000 | 0.2236 |
| 0.0509 | 191000 | 0.2178 |
| 0.0512 | 192000 | 0.2087 |
| 0.0515 | 193000 | 0.2222 |
| 0.0517 | 194000 | 0.211 |
| 0.0520 | 195000 | 0.2287 |
| 0.0523 | 196000 | 0.2219 |
| 0.0525 | 197000 | 0.2096 |
| 0.0528 | 198000 | 0.2112 |
| 0.0531 | 199000 | 0.2108 |
| 0.0533 | 200000 | 0.2098 |
| 0.0536 | 201000 | 0.2176 |
| 0.0539 | 202000 | 0.2118 |
| 0.0541 | 203000 | 0.2248 |
| 0.0544 | 204000 | 0.2124 |
| 0.0547 | 205000 | 0.2133 |
| 0.0549 | 206000 | 0.2101 |
| 0.0552 | 207000 | 0.208 |
| 0.0555 | 208000 | 0.2129 |
| 0.0557 | 209000 | 0.208 |
| 0.0560 | 210000 | 0.2093 |
| 0.0563 | 211000 | 0.2123 |
| 0.0565 | 212000 | 0.205 |
| 0.0568 | 213000 | 0.2012 |
| 0.0571 | 214000 | 0.2078 |
| 0.0573 | 215000 | 0.2107 |
| 0.0576 | 216000 | 0.206 |
| 0.0579 | 217000 | 0.2055 |
| 0.0581 | 218000 | 0.2067 |
| 0.0584 | 219000 | 0.2143 |
| 0.0587 | 220000 | 0.204 |
| 0.0589 | 221000 | 0.2071 |
| 0.0592 | 222000 | 0.2026 |
| 0.0595 | 223000 | 0.1994 |
| 0.0597 | 224000 | 0.2045 |
| 0.0600 | 225000 | 0.2155 |
| 0.0603 | 226000 | 0.2075 |
| 0.0605 | 227000 | 0.195 |
| 0.0608 | 228000 | 0.2028 |
| 0.0611 | 229000 | 0.1973 |
| 0.0613 | 230000 | 0.2034 |
| 0.0616 | 231000 | 0.2039 |
| 0.0619 | 232000 | 0.1937 |
| 0.0621 | 233000 | 0.2 |
| 0.0624 | 234000 | 0.1958 |
| 0.0627 | 235000 | 0.1986 |
| 0.0629 | 236000 | 0.1975 |
| 0.0632 | 237000 | 0.2061 |
| 0.0635 | 238000 | 0.2021 |
| 0.0637 | 239000 | 0.1957 |
| 0.0640 | 240000 | 0.1997 |
| 0.0643 | 241000 | 0.1968 |
| 0.0645 | 242000 | 0.1881 |
| 0.0648 | 243000 | 0.2038 |
| 0.0651 | 244000 | 0.1991 |
| 0.0653 | 245000 | 0.1841 |
| 0.0656 | 246000 | 0.1919 |
| 0.0659 | 247000 | 0.187 |
| 0.0661 | 248000 | 0.1889 |
| 0.0664 | 249000 | 0.1987 |
| 0.0667 | 250000 | 0.1992 |
| 0.0669 | 251000 | 0.1913 |
| 0.0672 | 252000 | 0.1995 |
| 0.0675 | 253000 | 0.1875 |
| 0.0677 | 254000 | 0.1923 |
| 0.0680 | 255000 | 0.1773 |
| 0.0683 | 256000 | 0.1869 |
| 0.0685 | 257000 | 0.1975 |
| 0.0688 | 258000 | 0.1865 |
| 0.0691 | 259000 | 0.1889 |
| 0.0693 | 260000 | 0.1896 |
| 0.0696 | 261000 | 0.1829 |
| 0.0699 | 262000 | 0.1843 |
| 0.0701 | 263000 | 0.195 |
| 0.0704 | 264000 | 0.1818 |
| 0.0707 | 265000 | 0.1855 |
| 0.0709 | 266000 | 0.1841 |
| 0.0712 | 267000 | 0.1889 |
| 0.0715 | 268000 | 0.1814 |
| 0.0717 | 269000 | 0.1917 |
| 0.0720 | 270000 | 0.1862 |
| 0.0723 | 271000 | 0.1869 |
| 0.0725 | 272000 | 0.1859 |
| 0.0728 | 273000 | 0.182 |
| 0.0731 | 274000 | 0.1896 |
| 0.0733 | 275000 | 0.1936 |
| 0.0736 | 276000 | 0.1846 |
| 0.0739 | 277000 | 0.18 |
| 0.0741 | 278000 | 0.1812 |
| 0.0744 | 279000 | 0.1859 |
| 0.0747 | 280000 | 0.1785 |
| 0.0749 | 281000 | 0.1806 |
| 0.0752 | 282000 | 0.182 |
| 0.0755 | 283000 | 0.1848 |
| 0.0757 | 284000 | 0.1798 |
| 0.0760 | 285000 | 0.1853 |
| 0.0763 | 286000 | 0.1834 |
| 0.0765 | 287000 | 0.1815 |
| 0.0768 | 288000 | 0.1819 |
| 0.0771 | 289000 | 0.1808 |
| 0.0773 | 290000 | 0.1851 |
| 0.0776 | 291000 | 0.1823 |
| 0.0779 | 292000 | 0.179 |
| 0.0781 | 293000 | 0.1825 |
| 0.0784 | 294000 | 0.1751 |
| 0.0787 | 295000 | 0.1778 |
| 0.0789 | 296000 | 0.1773 |
| 0.0792 | 297000 | 0.1795 |
| 0.0795 | 298000 | 0.1854 |
| 0.0797 | 299000 | 0.1818 |
| 0.0800 | 300000 | 0.1734 |
| 0.0803 | 301000 | 0.1787 |
| 0.0805 | 302000 | 0.1807 |
| 0.0808 | 303000 | 0.1817 |
| 0.0811 | 304000 | 0.1722 |
| 0.0813 | 305000 | 0.1762 |
| 0.0816 | 306000 | 0.1741 |
| 0.0819 | 307000 | 0.1754 |
| 0.0821 | 308000 | 0.1713 |
| 0.0824 | 309000 | 0.1724 |
| 0.0827 | 310000 | 0.1745 |
| 0.0829 | 311000 | 0.1774 |
| 0.0832 | 312000 | 0.1763 |
| 0.0835 | 313000 | 0.1768 |
| 0.0837 | 314000 | 0.1717 |
| 0.0840 | 315000 | 0.1692 |
| 0.0843 | 316000 | 0.1721 |
| 0.0845 | 317000 | 0.1673 |
| 0.0848 | 318000 | 0.1762 |
| 0.0851 | 319000 | 0.1784 |
| 0.0853 | 320000 | 0.1697 |
| 0.0856 | 321000 | 0.172 |
| 0.0859 | 322000 | 0.1658 |
| 0.0861 | 323000 | 0.1761 |
| 0.0864 | 324000 | 0.1729 |
| 0.0867 | 325000 | 0.1672 |
| 0.0869 | 326000 | 0.1671 |
| 0.0872 | 327000 | 0.1685 |
| 0.0875 | 328000 | 0.1729 |
| 0.0877 | 329000 | 0.166 |
| 0.0880 | 330000 | 0.1712 |
| 0.0883 | 331000 | 0.1737 |
| 0.0885 | 332000 | 0.1723 |
| 0.0888 | 333000 | 0.1705 |
| 0.0891 | 334000 | 0.1718 |
| 0.0893 | 335000 | 0.1689 |
| 0.0896 | 336000 | 0.1747 |
| 0.0899 | 337000 | 0.1696 |
| 0.0901 | 338000 | 0.1712 |
| 0.0904 | 339000 | 0.1674 |
| 0.0907 | 340000 | 0.1709 |
| 0.0909 | 341000 | 0.169 |
| 0.0912 | 342000 | 0.1714 |
| 0.0915 | 343000 | 0.1544 |
| 0.0917 | 344000 | 0.1755 |
| 0.0920 | 345000 | 0.1689 |
| 0.0923 | 346000 | 0.1561 |
| 0.0925 | 347000 | 0.1712 |
| 0.0928 | 348000 | 0.1583 |
| 0.0931 | 349000 | 0.159 |
| 0.0933 | 350000 | 0.1715 |
| 0.0936 | 351000 | 0.1608 |
| 0.0939 | 352000 | 0.1703 |
| 0.0941 | 353000 | 0.1682 |
| 0.0944 | 354000 | 0.1622 |
| 0.0947 | 355000 | 0.1663 |
| 0.0949 | 356000 | 0.1632 |
| 0.0952 | 357000 | 0.1663 |
| 0.0955 | 358000 | 0.1643 |
| 0.0957 | 359000 | 0.1674 |
| 0.0960 | 360000 | 0.1634 |
| 0.0963 | 361000 | 0.1616 |
| 0.0965 | 362000 | 0.1691 |
| 0.0968 | 363000 | 0.1594 |
| 0.0971 | 364000 | 0.1589 |
| 0.0973 | 365000 | 0.1568 |
| 0.0976 | 366000 | 0.1586 |
| 0.0979 | 367000 | 0.1555 |
| 0.0981 | 368000 | 0.161 |
| 0.0984 | 369000 | 0.1615 |
| 0.0987 | 370000 | 0.1691 |
| 0.0989 | 371000 | 0.151 |
| 0.0992 | 372000 | 0.1653 |
| 0.0995 | 373000 | 0.1545 |
| 0.0997 | 374000 | 0.1627 |
| 0.1000 | 375000 | 0.1688 |
| 0.1003 | 376000 | 0.1594 |
| 0.1005 | 377000 | 0.1619 |
| 0.1008 | 378000 | 0.1517 |
| 0.1011 | 379000 | 0.1605 |
| 0.1013 | 380000 | 0.1576 |
| 0.1016 | 381000 | 0.1589 |
| 0.1019 | 382000 | 0.1643 |
| 0.1021 | 383000 | 0.164 |
| 0.1024 | 384000 | 0.158 |
| 0.1027 | 385000 | 0.1584 |
| 0.1029 | 386000 | 0.1565 |
| 0.1032 | 387000 | 0.1566 |
| 0.1035 | 388000 | 0.1625 |
| 0.1037 | 389000 | 0.1569 |
| 0.1040 | 390000 | 0.159 |
| 0.1043 | 391000 | 0.1541 |
| 0.1045 | 392000 | 0.159 |
| 0.1048 | 393000 | 0.1536 |
| 0.1051 | 394000 | 0.166 |
| 0.1053 | 395000 | 0.1639 |
| 0.1056 | 396000 | 0.1491 |
| 0.1059 | 397000 | 0.1567 |
| 0.1061 | 398000 | 0.1566 |
| 0.1064 | 399000 | 0.1641 |
| 0.1067 | 400000 | 0.1552 |
| 0.1069 | 401000 | 0.1476 |
| 0.1072 | 402000 | 0.157 |
| 0.1075 | 403000 | 0.1538 |
| 0.1077 | 404000 | 0.152 |
| 0.1080 | 405000 | 0.1525 |
| 0.1083 | 406000 | 0.155 |
| 0.1085 | 407000 | 0.1538 |
| 0.1088 | 408000 | 0.1506 |
| 0.1091 | 409000 | 0.1481 |
| 0.1093 | 410000 | 0.1603 |
| 0.1096 | 411000 | 0.1509 |
| 0.1099 | 412000 | 0.1628 |
| 0.1101 | 413000 | 0.151 |
| 0.1104 | 414000 | 0.1581 |
| 0.1107 | 415000 | 0.1511 |
| 0.1109 | 416000 | 0.1552 |
| 0.1112 | 417000 | 0.1553 |
| 0.1115 | 418000 | 0.1508 |
| 0.1117 | 419000 | 0.1515 |
| 0.1120 | 420000 | 0.1526 |
| 0.1123 | 421000 | 0.15 |
| 0.1125 | 422000 | 0.1497 |
| 0.1128 | 423000 | 0.1526 |
| 0.1131 | 424000 | 0.1547 |
| 0.1133 | 425000 | 0.151 |
| 0.1136 | 426000 | 0.1471 |
| 0.1139 | 427000 | 0.1576 |
| 0.1141 | 428000 | 0.1522 |
| 0.1144 | 429000 | 0.1506 |
| 0.1147 | 430000 | 0.1495 |
| 0.1149 | 431000 | 0.1518 |
| 0.1152 | 432000 | 0.1467 |
| 0.1155 | 433000 | 0.1511 |
| 0.1157 | 434000 | 0.1516 |
| 0.1160 | 435000 | 0.1476 |
| 0.1163 | 436000 | 0.1526 |
| 0.1165 | 437000 | 0.1474 |
| 0.1168 | 438000 | 0.1445 |
| 0.1171 | 439000 | 0.1408 |
| 0.1173 | 440000 | 0.1412 |
| 0.1176 | 441000 | 0.1445 |
| 0.1179 | 442000 | 0.145 |
| 0.1181 | 443000 | 0.1402 |
| 0.1184 | 444000 | 0.154 |
| 0.1187 | 445000 | 0.1446 |
| 0.1189 | 446000 | 0.1476 |
| 0.1192 | 447000 | 0.1565 |
| 0.1195 | 448000 | 0.1409 |
| 0.1197 | 449000 | 0.1511 |
| 0.1200 | 450000 | 0.139 |
| 0.1203 | 451000 | 0.1463 |
| 0.1205 | 452000 | 0.1453 |
| 0.1208 | 453000 | 0.1432 |
| 0.1211 | 454000 | 0.1559 |
| 0.1213 | 455000 | 0.1354 |
| 0.1216 | 456000 | 0.1419 |
| 0.1219 | 457000 | 0.1452 |
| 0.1221 | 458000 | 0.147 |
| 0.1224 | 459000 | 0.1453 |
| 0.1227 | 460000 | 0.153 |
| 0.1229 | 461000 | 0.1496 |
| 0.1232 | 462000 | 0.1464 |
| 0.1235 | 463000 | 0.1423 |
| 0.1237 | 464000 | 0.1403 |
| 0.1240 | 465000 | 0.1458 |
| 0.1243 | 466000 | 0.1508 |
| 0.1245 | 467000 | 0.1442 |
| 0.1248 | 468000 | 0.1521 |
| 0.1251 | 469000 | 0.1424 |
| 0.1253 | 470000 | 0.1545 |
| 0.1256 | 471000 | 0.1389 |
| 0.1259 | 472000 | 0.1408 |
| 0.1261 | 473000 | 0.1398 |
| 0.1264 | 474000 | 0.1333 |
| 0.1267 | 475000 | 0.1436 |
| 0.1269 | 476000 | 0.1423 |
| 0.1272 | 477000 | 0.1393 |
| 0.1275 | 478000 | 0.1465 |
| 0.1277 | 479000 | 0.1484 |
| 0.1280 | 480000 | 0.1412 |
| 0.1283 | 481000 | 0.143 |
| 0.1285 | 482000 | 0.139 |
| 0.1288 | 483000 | 0.1447 |
| 0.1291 | 484000 | 0.1388 |
| 0.1293 | 485000 | 0.1414 |
| 0.1296 | 486000 | 0.1444 |
| 0.1299 | 487000 | 0.1365 |
| 0.1301 | 488000 | 0.1403 |
| 0.1304 | 489000 | 0.1398 |
| 0.1307 | 490000 | 0.1302 |
| 0.1309 | 491000 | 0.1443 |
| 0.1312 | 492000 | 0.1402 |
| 0.1315 | 493000 | 0.1451 |
| 0.1317 | 494000 | 0.1397 |
| 0.1320 | 495000 | 0.137 |
| 0.1323 | 496000 | 0.1493 |
| 0.1325 | 497000 | 0.1415 |
| 0.1328 | 498000 | 0.1365 |
| 0.1331 | 499000 | 0.1323 |
| 0.1333 | 500000 | 0.1384 |
| 0.1336 | 501000 | 0.1307 |
| 0.1339 | 502000 | 0.1385 |
| 0.1341 | 503000 | 0.1394 |
| 0.1344 | 504000 | 0.1393 |
| 0.1347 | 505000 | 0.1455 |
| 0.1349 | 506000 | 0.1374 |
| 0.1352 | 507000 | 0.1381 |
| 0.1355 | 508000 | 0.1363 |
| 0.1357 | 509000 | 0.1392 |
| 0.1360 | 510000 | 0.1399 |
| 0.1363 | 511000 | 0.1356 |
| 0.1365 | 512000 | 0.1395 |
| 0.1368 | 513000 | 0.1402 |
| 0.1371 | 514000 | 0.1382 |
| 0.1373 | 515000 | 0.1408 |
| 0.1376 | 516000 | 0.1398 |
| 0.1379 | 517000 | 0.1405 |
| 0.1381 | 518000 | 0.1351 |
| 0.1384 | 519000 | 0.1371 |
| 0.1387 | 520000 | 0.1302 |
| 0.1389 | 521000 | 0.14 |
| 0.1392 | 522000 | 0.1363 |
| 0.1395 | 523000 | 0.1313 |
| 0.1397 | 524000 | 0.1299 |
| 0.1400 | 525000 | 0.1372 |
| 0.1403 | 526000 | 0.1416 |
| 0.1405 | 527000 | 0.1295 |
| 0.1408 | 528000 | 0.1359 |
| 0.1411 | 529000 | 0.1383 |
| 0.1413 | 530000 | 0.1378 |
| 0.1416 | 531000 | 0.135 |
| 0.1419 | 532000 | 0.1405 |
| 0.1421 | 533000 | 0.14 |
| 0.1424 | 534000 | 0.1321 |
| 0.1427 | 535000 | 0.1303 |
| 0.1429 | 536000 | 0.1319 |
| 0.1432 | 537000 | 0.1312 |
| 0.1435 | 538000 | 0.1338 |
| 0.1437 | 539000 | 0.1361 |
| 0.1440 | 540000 | 0.139 |
| 0.1443 | 541000 | 0.1364 |
| 0.1445 | 542000 | 0.1316 |
| 0.1448 | 543000 | 0.1331 |
| 0.1451 | 544000 | 0.1269 |
| 0.1453 | 545000 | 0.1294 |
| 0.1456 | 546000 | 0.135 |
| 0.1459 | 547000 | 0.1328 |
| 0.1461 | 548000 | 0.1296 |
| 0.1464 | 549000 | 0.1305 |
| 0.1467 | 550000 | 0.1334 |
| 0.1469 | 551000 | 0.1362 |
| 0.1472 | 552000 | 0.1318 |
| 0.1475 | 553000 | 0.1312 |
| 0.1477 | 554000 | 0.1293 |
| 0.1480 | 555000 | 0.1324 |
| 0.1483 | 556000 | 0.1256 |
| 0.1485 | 557000 | 0.1227 |
| 0.1488 | 558000 | 0.1239 |
| 0.1491 | 559000 | 0.1287 |
| 0.1493 | 560000 | 0.1307 |
| 0.1496 | 561000 | 0.1336 |
| 0.1499 | 562000 | 0.133 |
| 0.1501 | 563000 | 0.1278 |
| 0.1504 | 564000 | 0.1339 |
| 0.1507 | 565000 | 0.1321 |
| 0.1509 | 566000 | 0.1322 |
| 0.1512 | 567000 | 0.1262 |
| 0.1515 | 568000 | 0.1331 |
| 0.1517 | 569000 | 0.1361 |
| 0.1520 | 570000 | 0.1307 |
| 0.1523 | 571000 | 0.133 |
| 0.1525 | 572000 | 0.1293 |
| 0.1528 | 573000 | 0.1283 |
| 0.1531 | 574000 | 0.1275 |
| 0.1533 | 575000 | 0.1329 |
| 0.1536 | 576000 | 0.1307 |
| 0.1539 | 577000 | 0.1245 |
| 0.1541 | 578000 | 0.1313 |
| 0.1544 | 579000 | 0.1256 |
| 0.1547 | 580000 | 0.1257 |
| 0.1549 | 581000 | 0.1194 |
| 0.1552 | 582000 | 0.125 |
| 0.1555 | 583000 | 0.1345 |
| 0.1557 | 584000 | 0.1308 |
| 0.1560 | 585000 | 0.1318 |
| 0.1563 | 586000 | 0.1348 |
| 0.1565 | 587000 | 0.1231 |
| 0.1568 | 588000 | 0.1282 |
| 0.1571 | 589000 | 0.1281 |
| 0.1573 | 590000 | 0.1221 |
| 0.1576 | 591000 | 0.1234 |
| 0.1579 | 592000 | 0.1334 |
| 0.1581 | 593000 | 0.1249 |
| 0.1584 | 594000 | 0.1216 |
| 0.1587 | 595000 | 0.1295 |
| 0.1589 | 596000 | 0.1191 |
| 0.1592 | 597000 | 0.1267 |
| 0.1595 | 598000 | 0.1273 |
| 0.1597 | 599000 | 0.124 |
| 0.1600 | 600000 | 0.1271 |
| 0.1603 | 601000 | 0.1284 |
| 0.1605 | 602000 | 0.1285 |
| 0.1608 | 603000 | 0.1288 |
| 0.1611 | 604000 | 0.1252 |
| 0.1613 | 605000 | 0.1255 |
| 0.1616 | 606000 | 0.1289 |
| 0.1619 | 607000 | 0.1294 |
| 0.1621 | 608000 | 0.1294 |
| 0.1624 | 609000 | 0.1288 |
| 0.1627 | 610000 | 0.1336 |
| 0.1629 | 611000 | 0.125 |
| 0.1632 | 612000 | 0.1288 |
| 0.1635 | 613000 | 0.122 |
| 0.1637 | 614000 | 0.1204 |
| 0.1640 | 615000 | 0.1245 |
| 0.1643 | 616000 | 0.1303 |
| 0.1645 | 617000 | 0.1187 |
| 0.1648 | 618000 | 0.1223 |
| 0.1651 | 619000 | 0.1311 |
| 0.1653 | 620000 | 0.1202 |
| 0.1656 | 621000 | 0.1271 |
| 0.1659 | 622000 | 0.1218 |
| 0.1661 | 623000 | 0.1218 |
| 0.1664 | 624000 | 0.1247 |
| 0.1667 | 625000 | 0.1289 |
| 0.1669 | 626000 | 0.1261 |
| 0.1672 | 627000 | 0.1262 |
| 0.1675 | 628000 | 0.1251 |
| 0.1677 | 629000 | 0.1271 |
| 0.1680 | 630000 | 0.1243 |
| 0.1683 | 631000 | 0.1266 |
| 0.1685 | 632000 | 0.1257 |
| 0.1688 | 633000 | 0.1215 |
| 0.1691 | 634000 | 0.1236 |
| 0.1693 | 635000 | 0.1267 |
| 0.1696 | 636000 | 0.1209 |
| 0.1699 | 637000 | 0.1188 |
| 0.1701 | 638000 | 0.1267 |
| 0.1704 | 639000 | 0.1259 |
| 0.1707 | 640000 | 0.1225 |
| 0.1709 | 641000 | 0.1183 |
| 0.1712 | 642000 | 0.1202 |
| 0.1715 | 643000 | 0.1279 |
| 0.1717 | 644000 | 0.1191 |
| 0.1720 | 645000 | 0.1206 |
| 0.1723 | 646000 | 0.1178 |
| 0.1725 | 647000 | 0.1234 |
| 0.1728 | 648000 | 0.1259 |
| 0.1731 | 649000 | 0.1227 |
| 0.1733 | 650000 | 0.1211 |
| 0.1736 | 651000 | 0.1216 |
| 0.1739 | 652000 | 0.1182 |
| 0.1741 | 653000 | 0.1205 |
| 0.1744 | 654000 | 0.1187 |
| 0.1747 | 655000 | 0.1144 |
| 0.1749 | 656000 | 0.1216 |
| 0.1752 | 657000 | 0.1287 |
| 0.1755 | 658000 | 0.122 |
| 0.1757 | 659000 | 0.1213 |
| 0.1760 | 660000 | 0.1217 |
| 0.1763 | 661000 | 0.1256 |
| 0.1765 | 662000 | 0.1227 |
| 0.1768 | 663000 | 0.1219 |
| 0.1771 | 664000 | 0.1261 |
| 0.1773 | 665000 | 0.1169 |
| 0.1776 | 666000 | 0.1192 |
| 0.1779 | 667000 | 0.1187 |
| 0.1781 | 668000 | 0.1117 |
| 0.1784 | 669000 | 0.1189 |
| 0.1787 | 670000 | 0.12 |
| 0.1789 | 671000 | 0.1204 |
| 0.1792 | 672000 | 0.1208 |
| 0.1795 | 673000 | 0.119 |
| 0.1797 | 674000 | 0.1161 |
| 0.1800 | 675000 | 0.1167 |
| 0.1803 | 676000 | 0.1235 |
| 0.1805 | 677000 | 0.1276 |
| 0.1808 | 678000 | 0.1188 |
| 0.1811 | 679000 | 0.1135 |
| 0.1813 | 680000 | 0.1187 |
| 0.1816 | 681000 | 0.1165 |
| 0.1819 | 682000 | 0.1224 |
| 0.1821 | 683000 | 0.125 |
| 0.1824 | 684000 | 0.1146 |
| 0.1827 | 685000 | 0.1162 |
| 0.1829 | 686000 | 0.1172 |
| 0.1832 | 687000 | 0.1197 |
| 0.1835 | 688000 | 0.113 |
| 0.1837 | 689000 | 0.1216 |
| 0.1840 | 690000 | 0.1144 |
| 0.1843 | 691000 | 0.1274 |
| 0.1845 | 692000 | 0.1136 |
| 0.1848 | 693000 | 0.1202 |
| 0.1851 | 694000 | 0.1249 |
| 0.1853 | 695000 | 0.1195 |
| 0.1856 | 696000 | 0.1158 |
| 0.1859 | 697000 | 0.1145 |
| 0.1861 | 698000 | 0.1187 |
| 0.1864 | 699000 | 0.1173 |
| 0.1867 | 700000 | 0.1181 |
| 0.1869 | 701000 | 0.1236 |
| 0.1872 | 702000 | 0.1223 |
| 0.1875 | 703000 | 0.1147 |
| 0.1877 | 704000 | 0.1197 |
| 0.1880 | 705000 | 0.1125 |
| 0.1883 | 706000 | 0.1175 |
| 0.1885 | 707000 | 0.1239 |
| 0.1888 | 708000 | 0.1263 |
| 0.1891 | 709000 | 0.1229 |
| 0.1893 | 710000 | 0.1202 |
| 0.1896 | 711000 | 0.1159 |
| 0.1899 | 712000 | 0.1232 |
| 0.1901 | 713000 | 0.1197 |
| 0.1904 | 714000 | 0.121 |
| 0.1907 | 715000 | 0.1189 |
| 0.1909 | 716000 | 0.1183 |
| 0.1912 | 717000 | 0.1091 |
| 0.1915 | 718000 | 0.1186 |
| 0.1917 | 719000 | 0.115 |
| 0.1920 | 720000 | 0.1146 |
| 0.1923 | 721000 | 0.1165 |
| 0.1925 | 722000 | 0.1192 |
| 0.1928 | 723000 | 0.1163 |
| 0.1931 | 724000 | 0.1162 |
| 0.1933 | 725000 | 0.1156 |
| 0.1936 | 726000 | 0.1218 |
| 0.1939 | 727000 | 0.1154 |
| 0.1941 | 728000 | 0.1131 |
| 0.1944 | 729000 | 0.118 |
| 0.1947 | 730000 | 0.1156 |
| 0.1949 | 731000 | 0.1193 |
| 0.1952 | 732000 | 0.1143 |
| 0.1955 | 733000 | 0.1211 |
| 0.1957 | 734000 | 0.1187 |
| 0.1960 | 735000 | 0.12 |
| 0.1963 | 736000 | 0.1164 |
| 0.1965 | 737000 | 0.1173 |
| 0.1968 | 738000 | 0.1151 |
| 0.1971 | 739000 | 0.1143 |
| 0.1973 | 740000 | 0.1141 |
| 0.1976 | 741000 | 0.1174 |
| 0.1979 | 742000 | 0.1185 |
| 0.1981 | 743000 | 0.1133 |
| 0.1984 | 744000 | 0.1174 |
| 0.1987 | 745000 | 0.1154 |
| 0.1989 | 746000 | 0.1138 |
| 0.1992 | 747000 | 0.1203 |
| 0.1995 | 748000 | 0.1119 |
| 0.1997 | 749000 | 0.111 |
| 0.2000 | 750000 | 0.1174 |
| 0.2003 | 751000 | 0.1204 |
| 0.2005 | 752000 | 0.1177 |
| 0.2008 | 753000 | 0.1139 |
| 0.2011 | 754000 | 0.1138 |
| 0.2013 | 755000 | 0.1179 |
| 0.2016 | 756000 | 0.1094 |
| 0.2019 | 757000 | 0.1092 |
| 0.2021 | 758000 | 0.1108 |
| 0.2024 | 759000 | 0.1125 |
| 0.2027 | 760000 | 0.1202 |
| 0.2029 | 761000 | 0.1119 |
| 0.2032 | 762000 | 0.1151 |
| 0.2035 | 763000 | 0.1169 |
| 0.2037 | 764000 | 0.1109 |
| 0.2040 | 765000 | 0.1112 |
| 0.2043 | 766000 | 0.1102 |
| 0.2045 | 767000 | 0.119 |
| 0.2048 | 768000 | 0.1131 |
| 0.2051 | 769000 | 0.1155 |
| 0.2053 | 770000 | 0.1133 |
| 0.2056 | 771000 | 0.1127 |
| 0.2059 | 772000 | 0.1116 |
| 0.2061 | 773000 | 0.1122 |
| 0.2064 | 774000 | 0.1151 |
| 0.2067 | 775000 | 0.1163 |
| 0.2069 | 776000 | 0.1162 |
| 0.2072 | 777000 | 0.1096 |
| 0.2075 | 778000 | 0.1151 |
| 0.2077 | 779000 | 0.1156 |
| 0.2080 | 780000 | 0.1135 |
| 0.2083 | 781000 | 0.1084 |
| 0.2085 | 782000 | 0.114 |
| 0.2088 | 783000 | 0.1128 |
| 0.2091 | 784000 | 0.1142 |
| 0.2093 | 785000 | 0.1092 |
| 0.2096 | 786000 | 0.1067 |
| 0.2099 | 787000 | 0.1156 |
| 0.2101 | 788000 | 0.1094 |
| 0.2104 | 789000 | 0.1078 |
| 0.2107 | 790000 | 0.1133 |
| 0.2109 | 791000 | 0.1165 |
| 0.2112 | 792000 | 0.1116 |
| 0.2115 | 793000 | 0.1111 |
| 0.2117 | 794000 | 0.1086 |
| 0.2120 | 795000 | 0.1114 |
| 0.2123 | 796000 | 0.1069 |
| 0.2125 | 797000 | 0.1094 |
| 0.2128 | 798000 | 0.1125 |
| 0.2131 | 799000 | 0.112 |
| 0.2133 | 800000 | 0.1107 |
| 0.2136 | 801000 | 0.1085 |
| 0.2139 | 802000 | 0.1067 |
| 0.2141 | 803000 | 0.1149 |
| 0.2144 | 804000 | 0.1068 |
| 0.2147 | 805000 | 0.1124 |
| 0.2149 | 806000 | 0.1109 |
| 0.2152 | 807000 | 0.1094 |
| 0.2155 | 808000 | 0.1097 |
| 0.2157 | 809000 | 0.1106 |
| 0.2160 | 810000 | 0.1152 |
| 0.2163 | 811000 | 0.1123 |
| 0.2165 | 812000 | 0.1102 |
| 0.2168 | 813000 | 0.11 |
| 0.2171 | 814000 | 0.1 |
| 0.2173 | 815000 | 0.1127 |
| 0.2176 | 816000 | 0.1135 |
| 0.2179 | 817000 | 0.1127 |
| 0.2181 | 818000 | 0.108 |
| 0.2184 | 819000 | 0.1119 |
| 0.2187 | 820000 | 0.1103 |
| 0.2189 | 821000 | 0.1084 |
| 0.2192 | 822000 | 0.1076 |
| 0.2195 | 823000 | 0.1145 |
| 0.2197 | 824000 | 0.109 |
| 0.2200 | 825000 | 0.1119 |
| 0.2203 | 826000 | 0.1117 |
| 0.2205 | 827000 | 0.1117 |
| 0.2208 | 828000 | 0.1062 |
| 0.2211 | 829000 | 0.1113 |
| 0.2213 | 830000 | 0.1101 |
| 0.2216 | 831000 | 0.1053 |
| 0.2219 | 832000 | 0.1122 |
| 0.2221 | 833000 | 0.1091 |
| 0.2224 | 834000 | 0.1106 |
| 0.2227 | 835000 | 0.1062 |
| 0.2229 | 836000 | 0.1091 |
| 0.2232 | 837000 | 0.1144 |
| 0.2235 | 838000 | 0.1106 |
| 0.2237 | 839000 | 0.1058 |
| 0.2240 | 840000 | 0.1085 |
| 0.2243 | 841000 | 0.1154 |
| 0.2245 | 842000 | 0.1096 |
| 0.2248 | 843000 | 0.1062 |
| 0.2251 | 844000 | 0.1089 |
| 0.2253 | 845000 | 0.108 |
| 0.2256 | 846000 | 0.1086 |
| 0.2259 | 847000 | 0.1084 |
| 0.2261 | 848000 | 0.1056 |
| 0.2264 | 849000 | 0.1042 |
| 0.2267 | 850000 | 0.1204 |
| 0.2269 | 851000 | 0.1053 |
| 0.2272 | 852000 | 0.1053 |
| 0.2275 | 853000 | 0.1065 |
| 0.2277 | 854000 | 0.1157 |
| 0.2280 | 855000 | 0.1112 |
| 0.2283 | 856000 | 0.1058 |
| 0.2285 | 857000 | 0.1084 |
| 0.2288 | 858000 | 0.1066 |
| 0.2291 | 859000 | 0.1116 |
| 0.2293 | 860000 | 0.1047 |
| 0.2296 | 861000 | 0.1145 |
| 0.2299 | 862000 | 0.1094 |
| 0.2301 | 863000 | 0.1108 |
| 0.2304 | 864000 | 0.1038 |
| 0.2307 | 865000 | 0.1044 |
| 0.2309 | 866000 | 0.106 |
| 0.2312 | 867000 | 0.105 |
| 0.2315 | 868000 | 0.108 |
| 0.2317 | 869000 | 0.1108 |
| 0.2320 | 870000 | 0.113 |
| 0.2323 | 871000 | 0.108 |
| 0.2325 | 872000 | 0.1069 |
| 0.2328 | 873000 | 0.1098 |
| 0.2331 | 874000 | 0.1021 |
| 0.2333 | 875000 | 0.109 |
| 0.2336 | 876000 | 0.1104 |
| 0.2339 | 877000 | 0.1043 |
| 0.2341 | 878000 | 0.1057 |
| 0.2344 | 879000 | 0.105 |
| 0.2347 | 880000 | 0.1042 |
| 0.2349 | 881000 | 0.1116 |
| 0.2352 | 882000 | 0.1151 |
| 0.2355 | 883000 | 0.1043 |
| 0.2357 | 884000 | 0.1023 |
| 0.2360 | 885000 | 0.1084 |
| 0.2363 | 886000 | 0.1103 |
| 0.2365 | 887000 | 0.1028 |
| 0.2368 | 888000 | 0.1055 |
| 0.2371 | 889000 | 0.1023 |
| 0.2373 | 890000 | 0.1099 |
| 0.2376 | 891000 | 0.1037 |
| 0.2379 | 892000 | 0.1068 |
| 0.2381 | 893000 | 0.1128 |
| 0.2384 | 894000 | 0.1023 |
| 0.2387 | 895000 | 0.1023 |
| 0.2389 | 896000 | 0.106 |
| 0.2392 | 897000 | 0.1005 |
| 0.2395 | 898000 | 0.1013 |
| 0.2397 | 899000 | 0.1131 |
| 0.2400 | 900000 | 0.107 |
| 0.2403 | 901000 | 0.1096 |
| 0.2405 | 902000 | 0.0963 |
| 0.2408 | 903000 | 0.1076 |
| 0.2411 | 904000 | 0.102 |
| 0.2413 | 905000 | 0.1147 |
| 0.2416 | 906000 | 0.1111 |
| 0.2419 | 907000 | 0.1035 |
| 0.2421 | 908000 | 0.1059 |
| 0.2424 | 909000 | 0.1037 |
| 0.2427 | 910000 | 0.1047 |
| 0.2429 | 911000 | 0.1049 |
| 0.2432 | 912000 | 0.1097 |
| 0.2435 | 913000 | 0.1062 |
| 0.2437 | 914000 | 0.1016 |
| 0.2440 | 915000 | 0.1061 |
| 0.2443 | 916000 | 0.1089 |
| 0.2445 | 917000 | 0.1032 |
| 0.2448 | 918000 | 0.1053 |
| 0.2451 | 919000 | 0.1075 |
| 0.2453 | 920000 | 0.1048 |
| 0.2456 | 921000 | 0.1007 |
| 0.2459 | 922000 | 0.11 |
| 0.2461 | 923000 | 0.1034 |
| 0.2464 | 924000 | 0.1059 |
| 0.2467 | 925000 | 0.1063 |
| 0.2469 | 926000 | 0.1051 |
| 0.2472 | 927000 | 0.1064 |
| 0.2475 | 928000 | 0.0986 |
| 0.2477 | 929000 | 0.1037 |
| 0.2480 | 930000 | 0.1093 |
| 0.2483 | 931000 | 0.102 |
| 0.2485 | 932000 | 0.0985 |
| 0.2488 | 933000 | 0.1023 |
| 0.2491 | 934000 | 0.104 |
| 0.2493 | 935000 | 0.1108 |
| 0.2496 | 936000 | 0.1061 |
| 0.2499 | 937000 | 0.1053 |
</details>
### Framework Versions
- Python: 3.12.2
- Sentence Transformers: 3.2.1
- Transformers: 4.45.2
- PyTorch: 2.5.0
- Accelerate: 1.0.1
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CustomTripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"PCR"
] |
knowledgator/gliner-llama-multitask-1B-v1.0 | knowledgator | token-classification | [
"gliner",
"pytorch",
"NER",
"information extraction",
"relation extraction",
"summarization",
"sentiment extraction",
"question-answering",
"token-classification",
"en",
"dataset:knowledgator/GLINER-multi-task-synthetic-data",
"arxiv:2406.12925",
"license:apache-2.0",
"region:us"
] | 2024-12-05T09:18:15 | 2024-12-10T15:32:03 | 36 | 1 | ---
datasets:
- knowledgator/GLINER-multi-task-synthetic-data
language:
- en
library_name: gliner
license: apache-2.0
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
tags:
- NER
- information extraction
- relation extraction
- summarization
- sentiment extraction
- question-answering
---
🚀 Meet the first multi-task prompt-tunable GLiNER model 🚀
**GLiNER-Multitask** is a model designed to extract various pieces of information from plain text based on a user-provided custom prompt. This versatile model leverages a bidirectional transformer encoder, similar to BERT, which ensures both high generalization and compute efficiency despite its compact size.
The `gliner-multitask-large` variant achieves state-of-the-art performance on NER zero-shot benchmarks, demonstrating its robustness and flexibility. It excels not only in named entity recognition but also in handling various other information extraction tasks, making it a powerful tool for diverse natural language processing applications.
### Supported tasks:
* **Named Entity Recognition (NER)**: Identifies and categorizes entities such as names, organizations, dates, and other specific items in the text.
* **Relation Extraction**: Detects and classifies relationships between entities within the text.
* **Summarization**: Extracts the most important sentences that summarize the input text, capturing the essential information.
* **Sentiment Extraction**: Identifies parts of the text that signal a positive, negative, or neutral sentiment.
* **Key-Phrase Extraction**: Identifies and extracts important phrases and keywords from the text.
* **Question-answering**: Finds an answer in the text given a question.
* **Open Information Extraction**: Extracts pieces of text given an open prompt from a user, for example, product description extraction.
* **Text classification**: Classifies text by matching labels specified in the prompt.
### Installation
To use this model, you must install the [GLiNER Python library](https://github.com/urchade/GLiNER):
```bash
pip install gliner
```
Also install the LLM2Vec package:
```bash
pip install llm2vec
```
Once you've installed the GLiNER library, you can import the GLiNER class and load this model with `GLiNER.from_pretrained`.
**How to use for NER:**
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("knowledgator/gliner-llama-multitask-1B-v1.0")
text = """
Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800. During his career at Microsoft, Gates held the positions of chairman, chief executive officer, president and chief software architect, while also being the largest individual shareholder until May 2014.
"""
labels = ["founder", "computer", "software", "position", "date"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
If you want to use flash attention or increase the maximum sequence length, use the following code:
```python
from gliner import GLiNER
import torch
model = GLiNER.from_pretrained("knowledgator/gliner-llama-1B-v1.0",
_attn_implementation = 'flash_attention_2',
max_length = 2048).to('cuda:0', dtype=torch.float16)
```
### Performance:
| Model | Dataset | Precision | Recall | F1 Score | F1 Score (Decimal) |
|------------------------------------|--------------------|-----------|--------|----------|--------------------|
| knowledgator/gliner-multitask-v0.5 | CrossNER_AI | 51.00% | 51.11% | 51.05% | 0.5105 |
| | CrossNER_literature | 72.65% | 65.62% | 68.96% | 0.6896 |
| | CrossNER_music | 74.91% | 73.70% | 74.30% | 0.7430 |
| | CrossNER_politics | 78.84% | 77.71% | 78.27% | 0.7827 |
| | CrossNER_science | 69.20% | 65.48% | 67.29% | 0.6729 |
| | mit-movie | 61.29% | 52.59% | 56.60% | 0.5660 |
| | mit-restaurant | 50.65% | 38.13% | 43.51% | 0.4351 |
| | **Average** | | | | **0.6276** |
| knowledgator/gliner-multitask-v1.0 | CrossNER_AI | 67.15% | 56.10% | 61.13% | 0.6113 |
| | CrossNER_literature | 71.60% | 64.74% | 68.00% | 0.6800 |
| | CrossNER_music | 73.57% | 69.29% | 71.36% | 0.7136 |
| | CrossNER_politics | 77.54% | 76.52% | 77.03% | 0.7703 |
| | CrossNER_science | 74.54% | 66.00% | 70.01% | 0.7001 |
| | mit-movie | 61.86% | 42.02% | 50.04% | 0.5004 |
| | mit-restaurant | 58.87% | 36.67% | 45.19% | 0.4519 |
| | **Average** | | | | **0.6325** |
| knowledgator/gliner-llama-multitask-1B-v1.0 | CrossNER_AI | 63.24% | 55.60% | 59.17% | 0.5917 |
| | CrossNER_literature | 69.74% | 60.10% | 64.56% | 0.6456 |
| | CrossNER_music | 74.03% | 67.22% | 70.46% | 0.7046 |
| | CrossNER_politics | 76.96% | 71.64% | 74.20% | 0.7420 |
| | CrossNER_science | 73.79% | 63.73% | 68.39% | 0.6839 |
| | mit-movie | 56.89% | 46.70% | 51.30% | 0.5130 |
| | mit-restaurant | 48.45% | 38.13% | 42.67% | 0.4267 |
| | **Average** | | | | **0.6153** |
---
**How to use for relation extraction:**
```python
text = """
Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800. During his career at Microsoft, Gates held the positions of chairman, chief executive officer, president and chief software architect, while also being the largest individual shareholder until May 2014.
"""
labels = ["Microsoft <> founder", "Microsoft <> inception date", "Bill Gates <> held position"]
entities = model.predict_entities(text, labels)
for entity in entities:
print(entity["label"], "=>", entity["text"])
```
### Construct a relation extraction pipeline with [utca](https://github.com/Knowledgator/utca)
First of all, we need to import the necessary components of the library, initialize the predictor - the GLiNER model - and construct a pipeline that combines NER and relation extraction:
```python
from utca.core import RenameAttribute
from utca.implementation.predictors import (
GLiNERPredictor,
GLiNERPredictorConfig
)
from utca.implementation.tasks import (
GLiNER,
GLiNERPreprocessor,
GLiNERRelationExtraction,
GLiNERRelationExtractionPreprocessor,
)
predictor = GLiNERPredictor( # Predictor manages the model that will be used by tasks
GLiNERPredictorConfig(
model_name = "knowledgator/gliner-llama-multitask-1B-v1.0", # Model to use
device = "cuda:0", # Device to use
)
)
pipe = (
GLiNER( # GLiNER task produces classified entities that will be at the "output" key.
predictor=predictor,
preprocess=GLiNERPreprocessor(threshold=0.7) # Entities threshold
)
| RenameAttribute("output", "entities") # Rename output entities from GLiNER task to use them as inputs in GLiNERRelationExtraction
| GLiNERRelationExtraction( # GLiNERRelationExtraction is used for relation extraction.
predictor=predictor,
preprocess=(
GLiNERPreprocessor(threshold=0.5) # Relations threshold
| GLiNERRelationExtractionPreprocessor()
)
)
)
```
To run the pipeline, we need to specify entity types and relations with their parameters:
```python
r = pipe.run({
"text": text, # Text to process
"labels": ["organisation", "founder", "position", "date"],
"relations": [{ # Relation parameters
"relation": "founder", # Relation label. Required parameter.
"pairs_filter": [("organisation", "founder")], # Optional parameter. It specifies possible members of relations by their entity labels.
"distance_threshold": 100, # Optional parameter. It specifies the max distance between spans in the text (i.e., the end of the span that is closer to the start of the text and the start of the next one).
}, {
"relation": "inception date",
"pairs_filter": [("organisation", "date")],
}, {
"relation": "held position",
"pairs_filter": [("founder", "position")],
}]
})
print(r["output"])
```
### Performance:
| Model | Dataset | Precision | Recall | F1 Score |
|:-----------------------|------------:|---------:|-----------:|-----------:|
| knowledgator/gliner-llama-multitask-1B-v1.0 | CrossRe | 0.606472 | 0.511444 | 0.554919 |
| | DocRed | 0.707483 | 0.589355 | 0.643039 |
| knowledgator/gliner-multitask-v0.5 | CrossRe | 0.585319 | 0.800176 | 0.676088 |
| | DocRed | 0.713392 | 0.772826 | 0.74192 |
|knowledgator/gliner-multitask-v1.0 | CrossRe | 0.760653 | 0.738556 | 0.749442 |
| | DocRed | 0.770644 | 0.761373 | 0.76598 |
---
**How to use for open information extraction:**
```python
prompt = """Find all positive aspects about the product:\n"""
text = """
I recently purchased the Sony WH-1000XM4 Wireless Noise-Canceling Headphones from Amazon and I must say, I'm thoroughly impressed. The package arrived in New York within 2 days, thanks to Amazon Prime's expedited shipping.
The headphones themselves are remarkable. The noise-canceling feature works like a charm in the bustling city environment, and the 30-hour battery life means I don't have to charge them every day. Connecting them to my Samsung Galaxy S21 was a breeze, and the sound quality is second to none.
I also appreciated the customer service from Amazon when I had a question about the warranty. They responded within an hour and provided all the information I needed.
However, the headphones did not come with a hard case, which was listed in the product description. I contacted Amazon, and they offered a 10% discount on my next purchase as an apology.
Overall, I'd give these headphones a 4.5/5 rating and highly recommend them to anyone looking for top-notch quality in both product and service.
"""
input_ = prompt+text
labels = ["match"]
matches = model.predict_entities(input_, labels)
for match in matches:
print(match["text"], "=>", match["score"])
```
### Performance:
*Dataset: WiRe57_343-manual-oie*
| Model | Precision | Recall | F1 Score |
|:-----------------------|------------:|---------:|-----------:|
| knowledgator/gliner-llama-multitask-1B-v1.0 | 0.9047 | 0.2794 | 0.4269 |
| knowledgator/gliner-multitask-v0.5 | 0.9278 | 0.2779 | 0.4287 |
| knowledgator/gliner-multitask-v1.0 | 0.8775 | 0.2733 | 0.4168 |
---
**How to use for question-answering:**
```python
question = "Who was the CEO of Microsoft?"
text = """
Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. During his career at Microsoft, Gates held the positions of chairman, chief executive officer, president and chief software architect, while also being the largest individual shareholder until May 2014.
"""
labels = ["answer"]
input_ = question+text
answers = model.predict_entities(input_, labels)
for answer in answers:
print(answer["text"], "=>", answer["score"])
```
### Performance:
*Dataset: SQuAD 2.0*
| Model | Precision | Recall | F1 Score |
|:-----------------------|------------:|---------:|-----------:|
| knowledgator/gliner-llama-multitask-1B-v1.0 | 0.578296 | 0.795821 | 0.669841 |
| knowledgator/gliner-multitask-v0.5 | 0.429213 | 0.94378 | 0.590072 |
| knowledgator/gliner-multitask-v1.0 | 0.601354 | 0.874784 | 0.712745 |
---
**How to use for summarization:**
With the threshold parameter, you can control how much information you want to extract.
```python
prompt = "Summarize the given text, highlighting the most important information:\n"
text = """
Several studies have reported its pharmacological activities, including anti-inflammatory, antimicrobial, and antitumoral effects.
The effect of E-anethole was studied in the osteosarcoma MG-63 cell line, and the antiproliferative activity was evaluated by an MTT assay.
It showed a GI50 value of 60.25 μM with apoptosis induction through the mitochondrial-mediated pathway. Additionally, it induced cell cycle arrest at the G0/G1 phase, up-regulated the expression of p53, caspase-3, and caspase-9, and down-regulated Bcl-xL expression.
Moreover, the antitumoral activity of anethole was assessed against oral tumor Ca9-22 cells, and the cytotoxic effects were evaluated by MTT and LDH assays.
It demonstrated a LD50 value of 8 μM, and cellular proliferation was 42.7% and 5.2% at anethole concentrations of 3 μM and 30 μM, respectively.
It was reported that it could selectively and in a dose-dependent manner decrease cell proliferation and induce apoptosis, as well as induce autophagy, decrease ROS production, and increase glutathione activity. The cytotoxic effect was mediated through NF-kB, MAP kinases, Wnt, caspase-3 and -9, and PARP1 pathways. Additionally, treatment with anethole inhibited cyclin D1 oncogene expression, increased cyclin-dependent kinase inhibitor p21WAF1, up-regulated p53 expression, and inhibited the EMT markers.
"""
labels = ["summary"]
input_ = prompt+text
threshold = 0.1
summaries = model.predict_entities(input_, labels, threshold=threshold)
for summary in summaries:
print(summary["text"], "=>", summary["score"])
```
---
**How to use for text classification:**
With the threshold parameter, you can control the recall and precision of text classification.
```python
prompt = "Classify text into the following classes: positive review, negative review"
text = """
"I recently purchased the Sony WH-1000XM4 Wireless Noise-Canceling Headphones from Amazon and I must say, I'm thoroughly impressed. The package arrived in New York within 2 days, thanks to Amazon Prime's expedited shipping.
"""
labels = ["match"]
input_ = prompt+text
threshold = 0.5
classes = model.predict_entities(input_, labels, threshold=threshold)
for label in classes:
print(label["text"], "=>", label["score"])
```
### Performance:
| Model Name | Dataset | Micro F1 Score |
|-----------------------|-----------|----------------|
| knowledgator/gliner-multitask-v1.0 | Emotion | 0.322 |
| | AG News | 0.7436 |
| | IMDb | 0.7907 |
| knowledgator/gliner-llama-multitask-1B-v1.0 | Emotion | 0.3475 |
| | AG News | 0.7436 |
| | IMDb | 0.7907 |
---
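**How to use for key-phrase extraction:**
The card lists key-phrase extraction among the supported tasks but does not include a snippet for it. Below is a minimal sketch that follows the same `predict_entities` pattern as the other tasks; the prompt wording, label name, and threshold are illustrative assumptions, not values published by the authors.
```python
# A minimal sketch, assuming the same prompt-based interface as the other tasks.
prompt = "Extract the most important key phrases from the text:\n"  # assumed prompt wording
text = """
Microsoft was founded by Bill Gates and Paul Allen on April 4, 1975 to develop and sell BASIC interpreters for the Altair 8800.
"""
labels = ["key phrase"]  # assumed label name
input_ = prompt + text
keyphrases = model.predict_entities(input_, labels, threshold=0.3)  # threshold chosen for illustration
for keyphrase in keyphrases:
    print(keyphrase["text"], "=>", keyphrase["score"])
```
---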
### Extensive NER Benchmarks:

Our multitask model demonstrates performance on zero-shot NER benchmarks comparable to models dedicated to the NER task (all labels were lowercased in this evaluation):
| Dataset | Precision | Recall | F1 Score | F1 Score (Decimal) |
|------------------------|-----------|--------|----------|--------------------|
| ACE 2004 | 40.45% | 18.49% | 25.38% | 0.2538 |
| ACE 2005 | 37.93% | 16.81% | 23.30% | 0.2330 |
| AnatEM | 41.08% | 29.71% | 34.48% | 0.3448 |
| Broad Tweet Corpus | 72.68% | 66.58% | 69.50% | 0.6950 |
| CoNLL 2003 | 70.34% | 68.77% | 69.54% | 0.6954 |
| CrossNER_AI | 63.24% | 55.60% | 59.17% | 0.5917 |
| CrossNER_literature | 69.74% | 60.10% | 64.56% | 0.6456 |
| CrossNER_music | 74.03% | 67.22% | 70.46% | 0.7046 |
| CrossNER_politics | 76.96% | 71.64% | 74.20% | 0.7420 |
| CrossNER_science | 73.79% | 63.73% | 68.39% | 0.6839 |
| FabNER | 35.11% | 16.55% | 22.49% | 0.2249 |
| FindVehicle | 46.76% | 27.30% | 34.47% | 0.3447 |
| GENIA_NER | 59.48% | 44.91% | 51.18% | 0.5118 |
| HarveyNER | 16.52% | 30.12% | 21.34% | 0.2134 |
| MultiNERD | 54.77% | 86.93% | 67.20% | 0.6720 |
| Ontonotes | 25.52% | 34.18% | 29.22% | 0.2922 |
| PolyglotNER | 35.54% | 65.73% | 46.13% | 0.4613 |
| TweetNER7 | 54.17% | 35.80% | 43.11% | 0.4311 |
| WikiANN en | 54.97% | 56.83% | 55.88% | 0.5588 |
| WikiNeural | 71.80% | 85.37% | 78.00% | 0.7800 |
| bc2gm | 51.17% | 48.71% | 49.91% | 0.4991 |
| bc4chemd | 50.76% | 68.69% | 58.38% | 0.5838 |
| bc5cdr | 75.05% | 67.16% | 70.89% | 0.7089 |
| mit-movie | 56.89% | 46.70% | 51.30% | 0.5130 |
| mit-restaurant | 48.45% | 38.13% | 42.67% | 0.4267 |
| ncbi | 66.27% | 57.47% | 61.56% | 0.6156 |
### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG).
### Citation:
```
@misc{stepanov2024gliner,
title={GLiNER multi-task: Generalist Lightweight Model for Various Information Extraction Tasks},
author={Ihor Stepanov and Mykhailo Shtopko},
year={2024},
eprint={2406.12925},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
``` | [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | [
"ANATEM",
"BC5CDR"
] |
medspaner/dccuchile-bert-base-spanish-wwm-uncased-re-ct-v2 | medspaner | null | [
"transformers",
"safetensors",
"bert",
"es",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | 2024-12-12T16:36:52 | 2025-01-10T17:41:12 | 36 | 0 | ---
base_model:
- dccuchile/bert-base-spanish-wwm-uncased
language:
- es
library_name: transformers
license: cc-by-nc-4.0
metrics:
- accuracy
- precision
- recall
- f1
---
# Model Card for dccuchile-bert-base-spanish-wwm-uncased-re-ct
This relation extraction model extracts intervention-associated relationships, temporal relations, negation/speculation, and other relations relevant to clinical trials.
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.868 (±0.009)
- Recall: 0.857 (±0.006)
- F1: 0.862 (±0.006)
- Accuracy: 0.907 (±0.003)
## Model description
This model adapts the pre-trained model [bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased).
It is fine-tuned to conduct relation extraction on Spanish texts about clinical trials.
The model is fine-tuned on the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
If you use this model, please, cite as follows:
```
@article{campillosetal2025,
title = {{Benchmarking Transformer Models for Relation Extraction and Concept Normalization in a Clinical Trials Corpus}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Zakhir-Puig, Sof{\'i}a and Heras-Vicente, J{\'o}nathan},
journal = {(Under review)},
year={2025}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for general purposes and may exhibit biases and/or other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) version 3 (annotated with semantic relationships).
It is a collection of 1200 texts about clinical trials studies and clinical trials announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
The CT-EBM-ES resource (version 1) can be cited as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: AdamW
- weight decay: 1e-2
- lr_scheduler_type: linear
- num_epochs: 5
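For clarity, here is a minimal sketch of a matching fine-tuning configuration using the Hugging Face `Trainer` API. The training script itself is not published, so the output path and seed below are illustrative assumptions; only the hyperparameters mirror the list above.
```python
from transformers import TrainingArguments

# Sketch of the reported hyperparameters; dataset preparation and the
# relation-classification head are omitted because they are not published.
training_args = TrainingArguments(
    output_dir="re-ct-finetuning",   # illustrative path
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=1e-2,               # AdamW is the Trainer's default optimizer
    lr_scheduler_type="linear",
    num_train_epochs=5,
    seed=42,                         # the authors varied the seed across 5 rounds
)
```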
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.877 (±0.009) | 0.857 (±0.006) | 0.862 (±0.006) | 0.907 (±0.003) |
**Results per class (test set; best model)**
| Class | Precision | Recall | F1 | Support |
|:---------------:|:--------------:|:--------------:|:--------------:|:---------:|
| Experiences | 0.96 | 0.98 | 0.97 | 2003 |
| Has_Age | 0.93 | 0.82 | 0.87 | 152 |
| Has_Dose_or_Strength | 0.79 | 0.83 | 0.81 | 189 |
| Has_Drug_Form | 0.91 | 0.80 | 0.85 | 64 |
| Has_Duration_or_Interval | 0.79 | 0.82 | 0.81 | 365 |
| Has_Frequency | 0.84 | 0.75 | 0.79 | 84 |
| Has_Quantifier_or_Qualifier | 0.89 | 0.89 | 0.89 | 1040 |
| Has_Result_or_Value | 0.91 | 0.91 | 0.91 | 384 |
| Has_Route_or_Mode | 0.89 | 0.83 | 0.86 | 221 |
| Has_Time_Data | 0.89 | 0.83 | 0.86 | 589 |
| Location_of | 0.94 | 0.97 | 0.96 | 1119 |
| Used_for | 0.86 | 0.88 | 0.87 | 731 |
### Usage
To use this model, you need to install the `datasets` library.
```shell
pip install datasets
```
Then you can define the necessary functions and classes to load the model.
```python
from transformers import (
BertTokenizerFast, BertModel, BertForPreTraining, BertConfig, BertPreTrainedModel,
DataCollatorWithPadding,AutoTokenizer
)
from transformers.modeling_outputs import SequenceClassifierOutput
import torch
import torch.nn as nn
from datasets import Dataset
from torch.utils.data import DataLoader
class BertForRelationExtraction(BertPreTrainedModel):
def __init__(self, config, num_labels):
super(BertForRelationExtraction, self).__init__(config)
self.num_labels = num_labels
# body
self.bert = BertModel(config)
# head
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.layer_norm = nn.LayerNorm(config.hidden_size * 2)
self.linear = nn.Linear(config.hidden_size * 2, self.num_labels)
self.init_weights()
def forward(self, input_ids, token_type_ids, attention_mask,
span_idxs, labels=None):
outputs = (
self.bert(input_ids, token_type_ids=token_type_ids,
attention_mask=attention_mask,
output_hidden_states=False)
.last_hidden_state)
sub_maxpool, obj_maxpool = [], []
for bid in range(outputs.size(0)):
# span includes entity markers, maxpool across span
sub_span = torch.max(outputs[bid, span_idxs[bid, 0]:span_idxs[bid, 1]+1, :],
dim=0, keepdim=True).values
obj_span = torch.max(outputs[bid, span_idxs[bid, 2]:span_idxs[bid, 3]+1, :],
dim=0, keepdim=True).values
sub_maxpool.append(sub_span)
obj_maxpool.append(obj_span)
sub_emb = torch.cat(sub_maxpool, dim=0)
obj_emb = torch.cat(obj_maxpool, dim=0)
rel_input = torch.cat((sub_emb, obj_emb), dim=-1)
rel_input = self.layer_norm(rel_input)
rel_input = self.dropout(rel_input)
logits = self.linear(rel_input)
if labels is not None:
loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(logits.view(-1, self.num_labels), labels.view(-1))
return SequenceClassifierOutput(loss, logits)
else:
return SequenceClassifierOutput(None, logits)
id2label = {0: 'Experiences',
1: 'Has_Age',
2: 'Has_Dose_or_Strength',
3: 'Has_Duration_or_Interval',
4: 'Has_Frequency',
5: 'Has_Route_or_Mode',
6: 'Location_of',
7: 'Used_for'}
def encode_data_inference(token_list,tokenizer):
tokenized_inputs = tokenizer(token_list,
is_split_into_words=True,
truncation=True)
span_idxs = []
for input_id in tokenized_inputs.input_ids:
tokens = tokenizer.convert_ids_to_tokens(input_id)
span_idxs.append([
[idx for idx, token in enumerate(tokens) if token.startswith("<S:")][0],
[idx for idx, token in enumerate(tokens) if token.startswith("</S:")][0],
[idx for idx, token in enumerate(tokens) if token.startswith("<O:")][0],
[idx for idx, token in enumerate(tokens) if token.startswith("</O:")][0]
])
tokenized_inputs["span_idxs"] = span_idxs
# tokenized_inputs["labels"] = [label2id[label] for label in examples["label"]]
return tokenized_inputs
def predict_example(example,model,tokenizer):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
collate_fn = DataCollatorWithPadding(tokenizer, padding="longest", return_tensors="pt")
encoded_data = encode_data_inference(example,tokenizer)
inferenceds = Dataset.from_dict(encoded_data)
inference_dl = DataLoader(inferenceds,
shuffle=False,
# sampler=SubsetRandomSampler(np.random.randint(0, encoded_nyt_dataset["test"].num_rows, 100).tolist()),
batch_size=1,
collate_fn=collate_fn)
for batch in inference_dl:
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
outputs = model(**batch)
predictions = torch.argmax(outputs.logits, dim=-1).cpu().numpy()
return [id2label[p] for p in predictions]
```
Finally, you can use it to make predictions:
```python
example = [['Título',
'público:',
'Estudio',
'multicéntrico,',
'aleatorizado,',
'doble',
'ciego,',
'controlado',
'con',
'placebo',
'del',
'anticuerpo',
'monoclonal',
'humano',
'anti-TNF',
'<O:CHE>',
'Adalimumab',
'</O:CHE>',
'en',
'<S:LIV>',
'sujetos',
'pediátricos',
'</S:LIV>',
'con',
'colitis',
'ulcerosa',
'moderada',
'o',
'grave']]
model = BertForRelationExtraction.from_pretrained("medspaner/dccuchile-bert-base-spanish-wwm-uncased-re-ct-v2",8)
tokenizer = AutoTokenizer.from_pretrained("medspaner/dccuchile-bert-base-spanish-wwm-uncased-re-ct-v2")
predict_example(example,model,tokenizer)
```
### Framework versions
- Transformers 4.42.4
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.19.1 | [
"RELATION_EXTRACTION"
] | [
"SCIELO"
] |
Free-Law-Project/modernbert-embed-base_finetune_8192 | Free-Law-Project | sentence-similarity | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:351",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:nomic-ai/modernbert-embed-base",
"base_model:finetune:nomic-ai/modernbert-embed-base",
"license:cc0-1.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-03-05T19:07:29 | 2025-03-18T00:06:11 | 36 | 0 | ---
base_model: nomic-ai/modernbert-embed-base
language:
- en
library_name: sentence-transformers
license: cc0-1.0
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:351
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Rose, J. Appeal from a judgment of the Supreme Court ( Malone,
Jr., J. ), entered June 13, 2001 in Albany County, which partially granted petitioner
’ s application, in a proceeding pursuant to CPLR article 78, to review a determination
of the Department of Health reducing a component of its Medicaid reimbursement
rate. Petitioner, a residential health care facility operating in Chemung County,
commenced this proceeding seeking, inter alia, annulment of respondents ’ determination
adjusting its case mix index based on misclassifications revealed in an audit
of patient review instrument data conducted by the Department of Health ( hereinafter
Department ) and recalculating petitioner ’ s Medicaid reimbursement rate for
the period beginning April 1, 1999. 1 Specifically, the Department found that
petitioner had improperly classified 28 patients as receiving restorative therapy
rather than maintenance therapy, reduced petitioner ’ s reimbursement rate accordingly,
and directed that future patient assessments be performed by an independent auditor.
Petitioner argued that the Department ’ s nurse - auditors had improperly “ second
- guessed ” the physician - prescribed rehabilitative care plans for its patients
by denying reimbursement even though petitioner had provided restorative therapy
as prescribed. Petitioner also argued that the Department acted arbitrarily and
capriciously in using a fixed general rule precluding reimbursement for restorative
therapy unless it produces actual improvement ( hereinafter actual improvement
standard ) that has not been properly adopted and filed as a formal regulation.
Supreme Court accepted this latter argument, granted the petition and remitted
the matter to respondents to review the patient classifications without re * 773course
to the actual improvement standard. Respondents now appeal. 2 Respondents argue
that Supreme Court ’ s ruling was improper because the Department ’ s actual improvement
standard is based on a rational interpretation of an existing regulation and,
thus, is not an unfiled rule. Petitioner reiterates its contentions that the denial
of reimbursement for restorative therapy provided to its patients was improper
both because it was based on an auditor ’ s after - the - fact medical judgment
and on an unfiled rule requiring actual improvement. Since the Department ’ s
auditors were not required to defer to the judgments of petitioner ’ s physicians
and therapists in retrospectively reviewing what patient care qualified for Medicaid
reimbursement ( see, Concourse Rehabilitation & Nursing Ctr. v DeBuono, US Dist
Ct, SD NY, June 11, 1988, Conti, J., slip op at 12, appeal dismissed 179 F3d 38
), we find no merit in petitioner ’ s first contention. Rather, as considered
by Supreme Court and as presented on appeal, the central issue is whether respondents
’ actual improvement standard for the restorative therapy classification is a
rational interpretation of an existing regulation or a new unfiled rule being
applied in violation of the State Administrative Procedure Act. Under 10 NYCRR
86 - 2. 30 ( i ) ( Instructions : Patient Review Instrument [ PRI ] [ 27 ] ),
a restorative therapy classification is proper where “ [ t ] here is positive
potential for improved functional status within a short and predictable period
of time ” and the “ [ t ] herapy plan of care and progress notes * * * support
that [ the ] patient has this potential / is improving. ” In its clarification
sheet provided to nursing homes, the Department explains that the phrase “ has
this potential / is improving ” means that the patient must demonstrate both the
potential for functional improvement and the actual occurrence of such improvement
in order to qualify for the restorative therapy classification. On this appeal,
the Department acknowledges that it has a fixed policy of applying the quoted
regulation in this manner. Contrary to Supreme Court ’ s conclusion, we find that
the Department ’ s clarification sheet is interpretive, that its interpretation
has a rational basis and that, therefore, the resulting actual improvement standard
does not constitute an improper unfiled rule ( see, State Administrative Procedure
Act * 774 § 102 [ 2 ] [ b ] [ iv ] ; see also, Matter of Dubb Enters. v New York
State Liq. Auth., 187 AD2d 831, 833 ; cf, Matter of Cordero v Corbisiero, 80 NY2d
771, 772 - 773 ; Matter of Stuyvesant Polyclinic v Axelrod, 117 AD2d 99, 101 ).
Generally, “ courts will defer to an agency ’ s interpretation of its own regulations
if not irrational ” ( Matter of Silver Lake Nursing Home v Axelrod, 156 AD2d 789,
790 ; see, Matter of Marzec v DeBuono, 95 NY2d 262, 266 ; Matter of County of
Rockland v Axelrod, 157 AD2d 960, 961 ), and the agency ’ s interpretation is
not rendered irrational simply because the regulation may be susceptible to a
different rational interpretation ( Matter of Jennings v New York State Off. of
Mental Health, 90 NY2d 227, 239 ). Petitioner focuses on the role played by the
forward slash or virgule in the phrase “ patient has this potential / is improving.
” Arguing that common usage reflects that the virgule merely means “ or, ” petitioner
concludes that the Department ’ s requirements of potential improvement and actual
improvement contradicts the language of the regulation. Our view of the use of
the virgule in the regulation at issue here leads to a contrary conclusion. “
Virgule ” has been defined as a symbol used to denote, inter alia, “ or ” or “
and or ” ( see, Webster ’ s Third New International Dictionary 2555 [ unabridged
1986 ], cross - referencing “ diagonal, ” Webster ’ s Third New International
Dictionary 622 [ unabridged 1986 ] ). Even defined in this way, the virgule allows
for usage as “ and, ” resulting in no contradiction when both alternatives apply.
However, “ virgule ” is more comprehensively defined as “ a short oblique stroke
( / ) between two words indicating that whichever is appropriate may be chosen
to complete the sense of the text in which they occur ” ( Random House Dictionary
of the English Language 2125 [ unabridged 2d ed 1993 ] ). This definition is particularly
apt here because the phrase “ patient has this potential / is improving ” follows,
and is parallel to, the preceding phrase “ therapy plan of care and progress notes.
” To interpret the entire regulation, rather than parse the latter phrase only,
it is rational to view the virgule as indicating that the reader should use the
words that most appropriately complete the sense of the whole sentence. As the
earlier phrase has two concepts with one anticipating future progress and the
other reporting actual progress, the phrase “ patient has this potential / is
improving ” provides the choice between potential and actual circumstances depending
upon whether a plan for a patient or a patient ’ s progress is being considered.
Interpreted this way, the regulation requires a therapy plan to set forth the
patient ’ s potential for improvement and the patient ’ s prog * 775ress notes
to reflect actual improvement in order to qualify as restorative. Such an interpretation
is also consistent with the overall regulatory scheme, for it seeks to assure
that restorative therapy is utilized when it potentially will result in patient
improvement while excluding reimbursement if the expected improvement is not achieved
( see, Concourse Rehabilitation & Nursing Ctr. v Whalen, 249 F3d 136, 143 - 146
). 3 Given the parallel structure of the pertinent phrases of the regulation and
the recognized use of the virgule to implement such parallelism, we find no conflict
between the cited regulation and respondents ’ interpretation, and conclude that
their interpretation has a rational basis. Finally, petitioner ’ s contention
that the issue is not judicially reviewable because the Department, through its
auditors, did not expressly rely on the actual improvement standard in reclassifying
petitioner ’ s patients is belied by the petition itself, which narrowly framed
the issue by asserting that the Department ’ s actual improvement standard had
resulted in the reclassifications. Accordingly, it was error to grant the petition
and require further assessment by the Department. Crew III, J. P., Peters, Mugglin
and Lahtinen, JJ., concur. Ordered that the judgment is modified, on the law,
without costs, by reversing so much thereof as partially granted the petition
; petition denied in its entirety ; and, as so modified, affirmed. . We refer
the reader to Concourse Rehabilitation & Nursing Ctr. v Whalen ( 249 F3d 136 )
for an overview of the Medicaid program and Matter of Teresian House Nursing Home
Co. v Chassin ( 218 AD2d 250 ) for a description of its process for auditing patient
assessments. . Since the judgment issued by Supreme Court is nonimal and, thus,
not appealable as of right ( see, CPLR 5701 [ b ] [ 1 ] ; [ c ] ), we exercise
our authority to grant permission to appeal sua sponte given the importance of
the issue presented ( see, Matter of Gane v Ambach, 135 AD2d 1013, 1013 - 1014
). . The Health Care Financing Agency ’ s “ Carriers Manual ” provides as follows
: “ Restorative Therapy. To constitute physical therapy a service must, among
other things, be reasonable and necessary to the treatment of the individual ’
s illness. * * * In addition, there must be an expectation that the patient ’
s condition will improve significantly in a reasonable ( and generally predictable
) period of time. However, if at any point in the treatment of an illness, it
is determined that the expectations will not materialize, the services will no
longer be considered reasonable and necessary ; and they, therefore, should be
excluded from coverage under § 1862 ( a ) ( 1 ) of the Social Security Act [ 42
USC § 1862 ( a ) ( 1 ) ] ” ( Carriers Manual, part 3, ch II, § 2210. 1 [ emphasis
supplied ] ).'
sentences:
- What are the legal standards for proving legal malpractice in New York?
- What are the criteria for granting a motion to dismiss in a criminal trial?
- What determines Medicaid reimbursement eligibility for restorative therapy in
New York?
- source_sentence: 'Bacon, J. The grounds on which the plaintiffs ask the relief to
which they suppose themselves entitled are two fold. First, they allege that the
proceedings of the defendants are calculated to do incalculable injury to the
farms of the plaintiffs, by cutting off and drying up their springs, and destroying
the growth of their young timber, and that these proceedings are conducted in
bad faith and with the intent to injure the plaintiffs, and benefit the lands
of other parties not contributing to the expense of the work ; and secondly, they
insist that the act under which the defendants are assuming to perform the work
in question is unconstitutional and void, as depriving the plaintiffs of their
property, not for any public use, and without providing them a just compensation
therefor. I shall spend no time upon the first branch of the plaintiffs ’ case,
because there is no evidence whatever before me tending to show that the defendants
are acting in bad faith ; and although there is some diversity of opinion whether
the mode adopted by the defendants is the one best calculated to secure the result
at which they are aiming, and whether the manner of its execution is the most
judicious, yet this may be deemed at best a balanced question, on the evidence.
Even if they err in judgment, a court would hardly be justified in interfering
by the summary process of injunction to restrain their proceedings. Unless the
defendants are violating the plain and manifiest intent and object of the statute
under which they are acting, or are proceeding in bad faith, the court should
not interpose its a, u * 168thority to suspend the work. In either aspect, I see
no sufficient ground, as disclosed by the evidence, to entitle the plaintiff to
the relief they ask under the first head of their complaint. The more important
question, as it was the one most elaborately and ably argued by the counsel on
both sides, respects the inquiry whether the act of April 16th, 1854, under which
the defendants are carrying on the work of draining, the Rome swamp, is not a
violation of the constitution, and therefore void. The plaintiffs ’ counsel insists
that the act is a violation of the constitutional inhibition against taking private
property, because, ( 1. ) It is not taken for a public use ; and ( 2. ) Because
no just compensation is provided for the parties whose property is taken. I. That
the property of A. cannot be taken and appropriated to the use of B., however
beneficial the change may bej and that the land of private citizens cannot be
occupied by the government or any subordinate functionary clothed with legislative
authority, under the pretense or the claim of improving it for the benefit of
the occupant or his neighbors, requires no argument to demonstrate. It is by no
means easy, however, to define the precise boundaries which limit the right to
appropriate private property for public use ; or, in other words, to determine
when the use shall be deemed public, and when not. It is insisted by the counsel
for the plaintiffs that the purposes for which the property is taken in this case
are not public, because the benefit is limited to, - and the expense assessed
upon, a few individuals. But how are we to determine the number to whom the benefit
will be confined? In the case of draining an extensive swamp, we can readily conceive
that the public health may be favorably affedted, throughout a wide region, within
and bordering upon the district where the work is carried on, and it surely is
for the public benefit that a large tract of land should be reclaimed from the
condition of a useless morass, and added to the agricultural resources of the
state. But the question returns upon us, who is to judge of the degree of necessity
which exists, and which alone will warrant the action of the legislative authority
in determining that private property may * 169be taken for public uses? It is
now well settled, if there ever has been any well founded doubt upon the proposition,
that the right of “ eminent domain ” remains in the government, or in the aggregate
body of the people in their sovereign capacity, and they have the right to resume
the possession in the manner directed by the organic and the statute laws of the
state, whenever the public interest requires it. The answer to the question I
have proposed, is perhaps no where better given than by the late chancéllor of
this state in the leading case of Beekman v. The Saratoga & Schenectady Rail Road
Co. ( 3 Paige, 73. ) “ If the public interest can in any way be promoted by the
taking of private property, it must rest in the wisdom of the legislature to determine
whether the benefit to the public will be of sufficient importance to render it
expedient for them to exercise the right of eminent domain, and to authorize an
interference with the private rights of individuals for that purpose. ” He adds,
“ upon this principle, not only the agents of government, but also individuals
and corporate bodies, have been authorized to take private property for the purpose
of making public highways, turnpike roads and canals, of erecting and constructing
wharves and basins, of establishing ferries, of draining sioamps and marshes,
and of bringing water to cities and villages. In all such cases the object of
the legislative '' grant of power is the public benefit derived from the contemplated
improvement. ” The use and benefit is not required to be universal, nor, in the
largest sense, even general. If it is confined to a specific district, it may
still be public. If some parties are more benefited than others, this forms no
objection to the use, if the public interest and convenience are thereby subserved.
Isolated and individual action will rarely secure the public and general result
which the legislative power is invoked to accomplish ; and, in view of all the
facts in this case, it is to be assumed that the legislature adjudged that the
public necessity or utility justified the exercise of the right of resumption,
and that the exigency existed which authorized the act in question. I do not say
that a case may not exist of such palpable and gross invasion of private rights,
unjustified by any semblance of pub - * 170lie necessity, that it would he the
duty of the courts to interfere for the protection of such rights, by pronouncing
the act a violation of the salutary principle which was designed to hold the legislative
authority in check. But the case must be very clear to warrant this interference.
On this part of the case, it is pertinent also to remark, that for the last fifty
years, at least, the legislature has exercised the power in question here, by
passing laws from time to time, authorizing, in various forms, the draining of
swamps and marshes, and the reclaiming of submerged lands. More than twenty such
acts will be found in the session laws of the state, commencing as early as 1804,
and continuing at various intervals down to the very last session of the legislature,
when the act in question was passed. This course of legislation is by no means
conclusive when a constitutional question arises, which may never have been agitated
in the courts, - under any of those acts. And we have been admonished by more
than one decision that no length of time, in which a course of legislation has
been continued, will protect any law from the condemnation of the judicial tribunals,
when its conflict with the constitution is brought distinctly to the test. ( See
opinion of Bronson, J. in Taylor v. Porter, 4 Hill, 140. ) While, therefore, it
is not affirmed that. these acts may be appealed to as decisive of the power of
the legislature to pass them, and that they are not within the constitutional
objection we have been considering, they nevertheless do lend some strength to
the argument that a power so long exercised, in such diversified forms and various
localities, may be deemed settled, as applied to the subject we are now considering.
Looking then at the principle which lies at the foundation of the right of the
government to take private property for public use by an appropriate act of legislation,
and the end which in this case may be fairly deemed the object and intent of the
act, I shall - find no difficulty in maintaining it as the lawful exercise of
the right of eminent domain, and holding that the taking of the lands of these
plaintiffs, so far as it was necessary to enter upon and appropriate them for
the purpose intended in this case, was and is a lawful taking of the same for
a public use. • * 171II. But there is an important condition connected with the
exercise of this power on the part of the government to take private property
for the public use ; and that is, the necessity of providing a just compensation
to the parties whose property shall be thus appropriated. This condition is fundamental
and imperative, and can only be satisfied by making such a provision as shall
be in truth “ just, ” or, in other words, adequate and compensatory. “ The principle,
” says Oh. J. Savage, ( Matter of Canal street, 11 Wend. 154, ) “ that private
property shall not be taken for public use without just compensation is found
in the constitution and laws of this state, and has its foundation in those elementary
principles of equity and justice which lie at the root of the social compact.
” And this provision must be made cotemporaneously with, and as a part of, the
act which authorizes the appropriation : For, in the language of Oh. Walworth,
( 18 Wend. 17, ) “ Before the legislature can authorize the agents of the state
and others to enter upon and occupy, or destroy or materially injure, the private
property of an individual, except in case of actual necessity " which will not
admit of delay, an adequate and certain remedy must be provided, whereby the owner
of such property may compel the payment of his damages or compensation, and he
is not bound to trust to the justice of the government to make provision for such
compensation by future legislation. ” And Kent, ( 2 Com. 389, ) recognizes the
same doctrine when he says, “ a provision for compensation is a necessary attendant
on the due and constitutional exercise of the power given to deprive an individual
of his property without his consent, and the principle is founded in natural equity,
and is laid down by jurists, as an acknowledged principle of universal law. ”
Bearing these principles in mind, and that by the term “ just compensation, ”
as used in the constitution, is to be understood “ a fair equivalent in money
— a quid pro quo, a recompense in value for the property taken, ” ( Per Mason,
senator, 18 Wend. 35 ; ) and remembering also that when private " property is
taken for public use by right of eminent domain, it is taken not as the owner
’ s share of contribution to a public burthen, but as so much * 172beyond bis
share — let us see whether the act of the legislature, under which the proceedings
of the defendants in this case have been taken, fulfills the constitutional requirement
on that subject. By the 3d section of the act of April 17th, ( Session Laws of
1854, p. 1000, ) it is made the duty of the commissioners to assess the costs
and expenses of the survey and the cutting of the ditches, and to apportion the
amount among the several owners of lands to be drained, according to the number
of acres respectively owned by each. This provision, it will be seen, devolves
the whole expenses upon the parties owning the lands to be drained ; and that
not in the ratio of relative benefit, but simply upon a property basis, and by
an equal assessment upon every acre throughout the swamp. The rule is highly objectionable
in respect to the mode of providing for the expenses, but is probably within the
scope of the legislative discretion as one form of the exercise of the taxing
power. These burthens never can be very equally adjusted, and there is no glaring
injustice in requiring those persons to pay the expenses, who are supposed to
receive an equivalent in the enhanced value of their own adjacent property. On
examining the act further, to ascertain what provision has been made for the damages
or compensation to be made to the owner whose lands are entered upon and taken,
we find the 11th section declares, that for any damages done to the owner or owners
of such lands, ( fee., the commissioners shall make just compensation ; and after
providing for their appraisal in the proper mode, it is declared that such damages,
and the costs of assessment and the per diem > of the commissioners, shall be
duly certified and “ assessed and collected as part of the expenses of the drainage
authorized by this act. ” The effect of the provision is to make the damages or
compensation to be collected and payable precisely as the expenses are, to wit,
by assessing the same upon the owners of the land, according to the number of
acres owned by each. But is this the “ just compensation ” contemplated and required
by the constitution? Most obviously, it seems to me, it is not. The taking of
land necessary for the work, and the dispossession of the owner ’ s right and
title thereto, is only to be vindicated on the ground '' * 173that it is required
for a public use. If the improvement is required for the public benefit, upon
what principle can the public benefited by the appropriation, be exempted from
their proper contribution to the just indemnification of the parties whose property
has been taken? The land appropriated is not the owner ’ s share of a contribution
to a public burthen, but is so much above and beyond his share. He should be compensated,
therefore, and the compensation should be made in good part, if not entirely,
by those who are benefited by the work accomplished, either in the increased salubrity
of the surrounding region, or the enhanced value of the lands which lie in the
immediate neighborhood. But by the operation of this section, the owner not only
loses his land, but is compelled to pa. y a portion of the very damages he has
sustained by such loss and the other consequential injuries he may have suffered
thereby. The money which is supposed to satisfy the damages suffered by the owner
may, in one sense, be said to find its way into one of the pockets of the proprietor
; but to accomplish that trick of legal legerdemain, it must first be taken out
of the other. Is this the “ just compensation ” the constitution contemplates?
Does it practically do any more than “ Keep the word of promise to the ear, To
break it to the hope. ” Besides, the burthen will of necessity be very unequally
apportioned among those who are doomed to bear it. It is incredible that every
owner of land in the swamp will suffer equal injury and receive equal benefit
from the work in question ; and the testimony in this case shows that such is
not the fact. A. is the owner of 20 acres, which is a mere morass, having no available
springs upon it, and no growth of timber which the progress of the work uproots
and destroys. B., on an adjoining lot, has. both springs indispensable for the
uses to which he is applying his already partially reclaimed land and a growth
of young timber, very valuable for farming purposes. And yet, under the law as
it stands, B. pays precisely at the same rate, as a compensation towards the damages
he has suffered, that A. does, who has not only suffered no injury, but has been
greatly benefited by * 174the appropriation of the land and the execution of the
work. This clearly is no just compensation, but a most inequitable distribution
of the burthens, which ought to be in some proximate proportion to the benefits.
It is urged by the counsel of the defendants that the act in question follows
the precedents of prior legislation on the same subject, and is formed on the
model of various acts which have authorized similar works. I have looked through
most of the acts on this subject in our session laws for many years, and it is
true that in " a great majority of cases no provision whatever has been mad §
for ascertaining or paying the compensation required to be made. These laws have
been probably acquiesced in by the parties who were interested in or affected
by them, and no question has been made in the courts, as far as I am aware, respecting
their constitutional validity. If there had been, I am unable to see how they
could have escaped judicial condemnation. But this has not been the invariable
course of legislation on this subject ; for on examining the act of April, 1816,
for draining the great marsh on the Caneseraga creek, I find an express provision,
that in case any person shall suffer injury or damage by occasion of the canal
and drainage of the land, his damages shall be ascertained by the commissioners,
and assessed on the proprietor of such lands “ as would in any wise be benefited
or made more valuable, by reason of the canal ” to be cut for the purpose of draining
the said swamp. And the same provision was made in reference to the expenses,
which were to be assessed in like manner, “ having reference to the benefit to
be received by each of the proprietors. ” So also in the act of April, 1825, for
draining the Cayuga marshes, it was made the duty of the commissioners, when the
work should be completed, to prepare an assessment roll and valuation of the land
reclaimed, and all other lands which in their opinion shall have been increased
in value by the lowering of the waters of the marsh, and assess a tax to pay for
the work, “ in an equal and just measure according to the valuation in the assessment
roll, ” adequate to meet the expenses of the work. And a substantially similar
provision is contained in the act of * 175February, 1822, for lowering Onondaga
Lake, and draining the marsh lands in the town of Salina. [ Oneida Special Term,
December 4, 1854. Bacon, Justice. ] These acts contain the proper provisions,
and are, it seems to me, founded on the true principle which ought to govern legislation
on the subject of appropriating private property for public uses. Nothing could
have been easier than to have inserted in the act we have been considering, a
section containing a provision similar to the one found in these acts, to which
I have referred, and thus have secured all the benefits which are expected to,
and doubtless. will, flow from a judicious discharge of the duties devolved upon
these defendants, while it preserved all the constitutional guaranties which have
been thrown around the rights of the private citizen. Future legislation may possibly,
even now, remedy this omission, giving validity to what has already been
done, but providing for that just indemnity and compensation to which it shall
be found the parties are ultimately entitled. But whether this be so or not, the
duty of the courts in a case where their interposition is invoked to stay proceedings
under a law which violates a plain constitutional provision, is clear and imperative,
and must be performed. The plaintiffs are accordingly entitled to the relief
demanded in the complaint, restraining the defendants from further proceedings
under the act in question. But as the defendants have been charged with a public
duty, under the apparent sanction of an act of the legislature, and have acted
in entire good faith, the judgment must be without costs against them.'
sentences:
- What legal principles govern the interpretation of insurance policy conditions
for claims and notice requirements?
- What are the requirements for obtaining a patent for an invention?
- What are the legal principles for determining public use and just compensation
under eminent domain?
- source_sentence: Order affirmed, with ten dollars * 928costs and disbursements.
All concurred, except Kruse, J., who dissented upon the ground that the order
for examination appears upon its face to have been made under article 1 of title
3 of chapter 9 of the Code of Civil Procedure. Such an order can only be made
by a judge and not by the court. If the. order was incorrectly entered it should
have been resettled before the judge who presided at the court that made it.
sentences:
- When can a court issue a writ of prohibition to stop legal proceedings in a lower
court?
- What are the tax implications of a property sale in the United States?
- What happens if a court order is improperly entered under civil procedure laws?
- source_sentence: Loring, J. The defendant operates a private hospital for gain.
The plaintiff went there to undergo an operation. She testified that " her physician
made the arrangements for [ her ] entering into the hospital. . . . That she
paid to the hospital $ 15 a week for attendance and $ 10 for the use of the operating
room. ” The operation was performed by a surgeon not connected with the defendant
hospital. The plaintiff was etherized by her family physician and he was not connected
with the defendant. In addition to the surgeon and the family physician two of
the defendant ’ s nurses were present at the operation. When the plaintiff was
on the operating table before she went under ether she had two rings on her hands.
After the operation and while the plaintiff was still under the effects of ether
she was carried from the operating room to her own room in the hospital by “ one
of the doctors assisted by the nurses. ” When the plaintiff came out of the ether
she noticed that the more valuable of the two rings ( a ring which “ would not
come off without assistance ” ) was missing. At the trial the plaintiff put the
surgeon and the family physician on the witness stand. Each of them testified
that he did not take the ring. The defendant put on the stand the superintendent
of the hospital, one of the two operating nurses and the plaintiff ’ s day nurse.
Each of them testified that she did not take the ring. The operating nurse who
was put upon the witness stand testified that the other operating nurse was in
California “ the last time she heard from ” her. The plaintiff made many requests
for rulings and now insists upon the first, fifth, eleventh and twelfth set forth
above. These were refused and an exception taken. The judge instructed the jury
that to recover the plaintiff must prove that she was in the exercise of due care
and that the defendant was negligent. An exception was taken to this ruling. The
case is here on these exceptions. *136 On the evidence the jury were warranted
in finding that the ring was forcibly removed from the plaintiff ’ s hand by the
operating nurse who when last heard from was in California. If the absent nurse
did steal the ring it is plain that the defendant is not liable on the ground
that in stealing the ring the nurse was acting within the scope of her employment
as a servant of the defendant. The first request for ruling therefore was properly
refused. If the plaintiff had stood in the relation of a stranger to the defendant
there would have been no error in the. trial. But the plaintiff did not stand
to the defendant in the relation of a stranger. It is apparent from, the bill
of exceptions that the case was not tried on the footing that the rights of the
plaintiff in this action depended upon the contract made by her with the defendant.
For this reason the terms of this contract do not appear as fully as they otherwise
would have done. But from what does appear in the bill of exceptions the presiding
judge was wrong in telling the jury that the defendant ’ s liability depended
upon the plaintiff proving that it was negligent. Under the contract entered
into by the defendant corporation it was its duty not only ( 1 ) to give the plaintiff
a room in the hospital before and after the operation and ( 2 ) to give her surgeon
and family physician the use of the operating room for the operation, but also
( 3 ) to give to the plaintiff the services of such nurses as were necessary for
her care before, after and during the operation. It expressly appeared at the
trial that “ she [ the plaintiff ] paid to the hospital $ 15 a week for attendance.
” The services of the nurses which under the contract the defendant was bound
to furnish the plaintiff included the services of nurses while she was unconscious
from the effects of the ether, a condition which was a necessary part of the operation.
And the question we have to decide is whether there was a violation of duty on
the part of the defendant under this contract if the operating nurse in question
stole the ring by forcibly pulling it off the plaintiff ’ s finger while she was
under the effects of ether, or whether on the facts appearing at the trial the
jury could have so found. We are of opinion that the jury could have so found.
If for example a stranger had burst into the operating room, attacked the plaintiff
and done her bodily harm or had attacked *137 the plaintiff while the nurses were
carrying her from the operating room to her own room and the defendant ’ s nurses
had stood by and done nothing to protect the plaintiff from those attacks, it
is plain in our opinion that there would have been a violation of the duty owed
by the defendant under its contract with the plaintiff. It is equally plain in
our opinion that the duty owed by the defendant under its contract with the plaintiff
extended to the care of the rings on her fingers while she was unconscious from
the effects of ether as well as to the security of her person. And finally it
is equally plain in our opinion that there is as much a violation of the duty
owed by the defendant under the contract where the attack upon the person or larceny
of the ring is committed by one of the defendant ’ s own nurses ( whose duty it
was to protect the plaintiff ) as well as in the case where the attack is made
by a stranger and the nurses do not undertake to protect her from the attack.
In its legal aspects the case is governed by the decision in Bryant v. Rich, 106
Mass. 180. In that case a dispute arose between a passenger on one of the defendant
’ s steamers and one of the defendant ’ s waiters as to whether the passenger
had paid for his supper. The plaintiff, a cousin of the passenger in question,
made a suggestion to which no exception could have been taken. Whereupon not only
the waiter in question but the head steward and the other waiters knocked down
the plaintiff and beat him. It was for this assault and battery that the action
in Bryant v. Rich was brought. The presiding judge ruled ( in accordance with
a request made by the defendant ) that “ there is no evidence that the steward
and waiters, in assaulting the plaintiff, were acting within the scope of any
authority, or in the discharge of any duty, imposed upon them by the defendants.
” But in spite of this he instructed the jury that the plaintiff was entitled
to recover. This ruling was sustained on the ground that as matter of contract
the plaintiff as a passenger had the right to receive proper treatment from the
defendants and their servants and all of them. This decision has been followed
in other cases - of carriers of passengers. Hayne v. Union Street Railway, 189
Mass. 551. Jackson v. Old Colony Street Railway, 206 Mass. 477. Gentile v. Boston
Elevated Railway, 217 Mass. 113. In Levins v. New York, New Haven, & Hartford
Railroad, 183 Mass. 175, it was held that a case was *138 not made out under this
rule where a purse had been accidentally left on the window sill of the wash
room of a car of the defendant company. In Fairbanks v. Boston Storage Warehouse
Co. 189 Mass. 419, it was held that it did not apply where an assault was made
by an attendant who under the rules of the defendant company accompanied the plaintiff
when he went to examine goods stored by him in the warehouse of the defendant.
The reason why the rule of Bryant v. Rich did not apply in the case of Fairbanks
v. Boston Storage Warehouse Co. was because of the fact that the employee who
made the assault was in attendance upon the plaintiff at the time in question
for the plaintiff ’ s own purposes. He was not a servant of the defendant to whose
services the plaintiff was entitled under his contract with the defendant. The
decision in Bryant v. Rich does not depend upon the fact that the defendants in
that case were common carriers. The decision would have been the same had the
assault and battery occurred on an excursion steamer in place of upon a steamer
operated by a common carrier. And the decision would have been the same if the
steward and waiters had stolen rings from Bryant ’ s fingers in place of knocking
him down as they did. The doctrine of Bryant v. Rich applies whenever there is
a contract between the plaintiff and defendant by force of which the defendant
is to furnish for the plaintiff ’ s comfort the services of its, the defendant
’ s, employees. Where the injury to the plaintiff is caused by an act of the defendant
’ s servants done in the course of their employment an action may be brought based
on negligence of the defendant ’ s servants for which the defendant is liable
because the act took place in the course of his servants ’ employment, or an action
may be brought in that case based on violation of the duty owed by the defendant
to the plaintiff under the contract between the defendant and the plaintiff. But
where ( as was the case in Bryant v. Rich and in the case at bar ) the injury
done the plaintiff is caused by an act of the defendant ’ s servants outside of
the servants ’ duty as employees of the defendant but by an act of the defendant
’ s servants which while not in the course of the servants ’ employment is none
the less a violation of the duty owed by the defendant under the defendant ’ s
contract with the plaintiff, the only action that can be brought is an action
founded upon the duty arising out of the contract. *139 The second count sufficiently
sets forth a liability on the part of the defendant for violation of its duty
under its contract with the plaintiff. It was held in Bryant v. Rich that “ for
a violation of such a contract either by force or negligence, the plaintiff may
bring an action of tort, or an action of contract. ” What has been said leaves
open the defence which arises out of the testimony that the plaintiff when received
into the hospital was asked to put into the custody of the defendant corporation
all her “ valuables. ” The defendant ’ s agent who received the plaintiff when
she came to the hospital testified that that request was made to her at that
time. The plaintiff on the other hand testified that she was asked to put her
money into the custody of the hospital but that she was not asked to put anything
else into its custody. If the defendant ’ s evidence is believed, a defence is
made out. On the other hand if the plaintiff ’ s evidence on this matter is believed,
her rights depend upon the rule of Bryant v. Rich, ubi supra. Exceptions sustained.
sentences:
- What are the tax implications of operating a private hospital for profit?
- What legal principles determine a hospital's liability for the actions of its
employees under a contract with a patient?
- What are the legal implications of improperly imposed sublet surcharges in cooperative
housing disputes?
- source_sentence: Welsh, J. This is an action alleging negligence in the operation
of a motor vehicle. The case was tried before a jury. A verdict was returned indicating
that the defendant was not negligent. The issue on appeal is whether the judge
erred in failing to instruct the jury in accordance with G. L. c. 89, § 8, ( the
general “ right of way ” at intersections ) as well as G. L. c. 89, § 9 ( the
duty of a motorist at an intersection governed by a stop sign ). We determine
there was no error. The following evidence was adduced at trial. On January 9,
1996, the plaintiff was operating a motor vehicle on Revere Street a public way
in Quincy. She testified that she came to a complete stop at a “ stop ” sign at
the intersection of Revere Street and Mechanic Street also a public way. A large
mound of snow obstructed her view and she was unable to see the intersection.
She proceeded out into the intersection and stopped again about half way into
the intersection. The passable roadway was narrowed considerably due to the snow
banks on the sides of the road. She allowed a white car to pass her and then started
up again. She testified that she saw the car operated by the defendant approaching
at a speed of 45 miles per hour ; nevertheless she proceeded through the intersection,
making a left turn in the path of the oncoming vehicle. The defendant ’ s vehicle
struck the left side of the plaintiffs vehicle, with left hand side damage to
the defendant ' s vehicle. The defendant testified that the plaintiff did not
stop. The jury determined that the defendant was not negligent. The court gave
comprehensive instructions on the elements of negligence and the duty of care.
The court specifically instructed the jury as to the issue of violation of a statute
as evidence of negligence, taking pains to explain that the violation, if found,
must be a contributing factor to the damage sustained by the plaintiff. See Minnehan
v. Hiland, 278 Mass. 518, 523 ( 1932 ). He specifically charged as to the duty
to stop at a stop sign as provided by G. L. c. 89, § 9. 2 The plaintiff ’ s quarrel
with the judge is that he failed specifically to instruct as she requested regarding
G. L. c. 89, § 8, the general duty of care applicable when two motorists arrive
at an intersection at approximately the same time. There was no error. G. L. c.
89, § 8 expressly provides that its provisions do not *138 apply when an operator
is otherwise directed by a traffic regulatory sign erected and maintained in accordance
with the provision of Sec. 2 of Ch. 85 ( which would include “ stop ” signs ).
See Canane v. Dandini, 355 Mass. 72, 75 ( 1968 ). G. L. c. 89, § 9 is the statute
that is primarily applicable to intersections governed by stop signs. As stated
in Canane, one directed to stop by a stop sign may not have the benefit of the
general rule if the rule grants him the right of way, until he has complied with
the order to stop. After stopping, the operator becomes subject to the general
rule and may proceed and thereafter exercise the right of way in accordance with
that rule. Id. at 75. However, the operator must proceed into the intersection
with due care. Even if the operator has the right of way under c. 89, § 8, that
right is subject to the requirement of using due care. Possession of the right
of way is only one factor to be considered in deciding whether the operator has
fulfilled his duty of due care. Id. at 76. Accordingly, an operator who has stopped
at a “ stop ” sign may still be found to be negligent if he proceeds into the
intersection without using due care. The duty to exercise due care requires an
operator who has halted at a stop sign to behave with reasonable caution before
entering the intersection. Even an operator who has stopped at a stop sign and
has a “ right of way ” under § 8 may be found to be negligent if he proceeds into
the intersection before he can do so with reasonable prudence and with suitable
regard for his safety and that of others. Freyermuth v. Lutfy, 376 Mass. 612,
616, n. 3 ( 1978 ). Again, the “ right of way ” rule in § 8 is not absolute,
but is subject to the condition of due care as to its exercise. With these principles
in mind, we turn to the judge ’ s charge. At the outset, we observe that it is
not required that the judge charge the jury in the precise formulation proposed
[ see Poole v. Boston & Main Ry., 216 Mass. 12, 15 ( 1913 ) ] so long as the judge
fairly and adequately covers the point in the charge. See Comeau v. Beck, 319
Mass. 17, 10 ( 1946 ) ; Squires v. Fraska, 301 Mass. 474, 476 ( 1938 ). Stated
somewhat differently, the denial of requested instruction does not constitute
error where the requested instructions were covered substantially in the charge.
Pearlin v. Farrell, 356 Mass. 741 ( 1970 ). The judge gave detailed and comprehensive
instructions on the concept of negligence in the context of operating of motor
vehicles. He explained the duty of a motorist with regard to intersections controlled
by stop signs. This explanation included the duty to yield to vehicles in or in
close proximity to the intersection. While the instruction did not follow precisely
the formulation suggested in the Canane and Freyermuth cases, the judge ’ s instruction
properly stressed the duty of due care when proceeding into the intersection governed
by the stop sign after having stopped. Appeal dismissed. So ordered. “ Another
rule of the road is that every driver approaching a stop sign, shall stop at a
clearly marked stop line, and if there is not a stop line, then [ at ] a point
nearest the intersecting roadway before entering it After having stopped, the
driver shall yield the right of way to every vehicle in the intersection or approaching
in [ the ] other roadway so closely as to constitute an immediate hazard during
the time when the driver is moving across or within the intersection. ”
sentences:
- How is rent abatement calculated in cases involving a breach of the warranty of
habitability in Section 8 housing?
- What are the legal requirements for establishing a valid contract in business
law?
- What is the legal duty of care for drivers at intersections with stop signs?
model-index:
- name: modernbert-embed-base trained on triplets
results:
- task:
type: triplet
name: Triplet
dataset:
name: dev
type: dev
metrics:
- type: cosine_accuracy
value: 1
name: Cosine Accuracy
---
# modernbert-embed-base trained on triplets
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [nomic-ai/modernbert-embed-base](https://huggingface.co/nomic-ai/modernbert-embed-base) <!-- at revision d556a88e332558790b210f7bdbe87da2fa94a8d8 -->
- **Maximum Sequence Length:** 8192 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Free-Law-Project/modernbert-embed-base_finetune_8192")
# Run inference
sentences = [
"Welsh, J. This is an action alleging negligence in the operation of a motor vehicle. The case was tried before a jury. A verdict was returned indicating that the defendant was not negligent The issue on appeal is whether the judge erred in failing to instruct the jury in accordance with G. L. c. 89, § 8, ( the general “ right of way ” at intersections ) as well as G. L. c. 89, § 9 ( the duty of a motorist at an intersection governed by a stop sign ). We determine there was no error. The following evidence was adduced at trial. On January 9, 1996, the plaintiff was operating a motor vehicle on Revere Street a public way in Quincy. She testified that she came to a complete stop at a “ stop ” sign at the intersection of Revere Street and Mechanic Street also a public way. A large mound of snow obstructed her view and she was unable to see the intersection. She proceeded out into the intersection and stopped again about half way into the intersection. The passable roadway was narrowed considerably due to the snow banks on the sides of the road. She allowed a white car to pass her and then started up again. She testified that she saw the car operated by the defendant approaching at a speed of 45 miles per hour ; nevertheless she proceeded through the intersection, making a left turn in the path of the oncoming vehicle. The defendant ’ s vehicle struck the left side of the plaintiffs vehicle, with left hand side damage to the defendant ' s vehicle. The defendant testified that the plaintiff did not stop. The jury determined that the defendant was not negligent The court gave comprehensive instructions on the elements of negligence and the duty of care. The court specifically instructed the jury as to the issue of violation of a statute as evidence of negligence, taking pains to explain that the violation, if found, must be a contributing factor to the damage sustained by the plaintiff. See Minnehan v. Hiland, 278 Mass. 518, 523 ( 1932 ). He specifically charged as to the duty to stop at a stop sign as provided by G. L. c. 89, § 9. 2 The plaintiff ’ s quarrel with the judge is that he failed specifically to instruct as she requested regarding G. L. c. 89, § 8, the general duty of care applicable when two motorists arrive at an intersection at approximately the same time. There was no error. G. L. c. 89, § 8 expressly provides that its provisions do not * 138apply when an operator is otherwise directed by a traffic regulatory sign erected and maintained in accordance with the provision of Sec. 2 of Ch. 85 ( which would include “ stop ” signs ). See Canane v. Dandini, 355 Mass. 72, 75 ( 1968 ). G. L. c. 89, § 9 is the statute that is primarily applicable to intersections governed by stop signs. As stated in Canane, one directed to stop by a stop sign may not have the benefit of the general rule if the rule grants him the right of way, until he has complied with the order to stop. After stopping, the operator becomes subject to the general rule and may proceed and thereafter exercise the right of way in accordance with that rule. Id. at 75. However, the operator must proceed into the intersection with due care. Even if the operator has the right of way under c. 89, § 8, that right is subject to the requirement of using due care. Possession of the right of way is only one factor to be considered in deciding whether the operator has fulfilled his duty of due care. Id. at 76. 
Accordingly, an operator who has stopped at a “ stop ” sign may still be found to be negligent if he proceeds into the intersection without using due care. The duty to exercise due care requires an operator who has halted at a stop sign to behave with reasonable caution before entering the intersection. Even an operator who has stopped at a stop sign and has a “ right of way ” under § 8 may be found to be negligent if he proceeds into the intersection before he can do so with reasonable prudence and with suitable regard for his safety and that of others. Freyermuth v. Lutfy, 376 Mass., 612, 616, N. 3. ( 1978 ). Again, the “ right of way ^ rule in § 8 is not absolute, but is subject to the condition of due care as to its exercise. With these principles in mind, we turn to the judge ’ s charge. At the outset, we observe that it is not required that the judge charge the jury in the precise formulation proposed [ see Poole v. Boston & Main Ry., 216 Mass. 12, 15 ( 1913 ) ] so long as the judge fairly and adequately covers the point in the charge. See Comeau v. Beck, 319 Mass. 17, 10 ( 1946 ) ; Squires v. Fraska, 301 Mass. 474, 476 ( 1938 ). Stated somewhat differently, the denial of requested instruction does not constitute error where the requested instructions were covered substantially in the charge. Pearlin v. Farrell, 356 Mass. 741 ( 1970 ). The judge gave detailed and comprehensive instructions on the concept of negligence in the context of operating of motor vehicles. He explained the duty of a motorist with regard to intersections controlled by stop signs. This explanation included the duty to yield to vehicles in or in close proximity to the intersection. While the instruction did not follow precisely the formulation suggested in the Canane and Freyermuth cases, the judge ’ s instruction properly stressed the duty of due care when proceeding into the intersection governed by the stop sign after having stopped. Appeal dismissed. So ordered. “ Another rule of the road is that every driver approaching a stop sign, shall stop at a clearly marked stop line, and if there is not a stop line, then [ at ] a point nearest the intersecting roadway before entering it After having stopped, the driver shall yield the right of way to every vehicle in the intersection or approaching in [ the ] other roadway so closely as to constitute an immediate hazard during the time when the driver is moving across or within the intersection. ”",
'What is the legal duty of care for drivers at intersections with stop signs?',
'What are the legal requirements for establishing a valid contract in business law?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `dev`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:--------------------|:--------|
| **cosine_accuracy** | **1.0** |
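For reference, a minimal sketch of reproducing this evaluation with the sentence-transformers API (the triplet lists below are placeholders standing in for the dev split's anchor/positive/negative columns):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("Free-Law-Project/modernbert-embed-base_finetune_8192")

# Placeholder triplets; in practice these come from the evaluation split.
evaluator = TripletEvaluator(
    anchors=["<full court opinion text>"],
    positives=["What is the legal duty of care for drivers at intersections with stop signs?"],
    negatives=["What are the legal requirements for establishing a valid contract in business law?"],
    name="dev",
)
print(evaluator(model))  # reports dev_cosine_accuracy
```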
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Free-Law-Project/opinions-synthetic-query-8192
* Size: 351 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 351 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 62 tokens</li><li>mean: 2810.15 tokens</li><li>max: 7455 tokens</li></ul> | <ul><li>min: 12 tokens</li><li>mean: 18.93 tokens</li><li>max: 31 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 14.86 tokens</li><li>max: 21 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------|
| <code>DISTRICT COURT OF APPEAL OF THE STATE OF FLORIDA FOURTH DISTRICT EURICE McGILL, Appellant, v. STATE OF FLORIDA, Appellee. No. 4D17 - 1492 [ August 31, 2017 ] Appeal of order denying rule 3. 850 motion from the Circuit Court for the Seventeenth Judicial Circuit, Broward County ; Paul L. Backman, Judge ; L. T. Case No. 10 - 12523CF10A. Eurice McGill, Lake City, pro se. No appearance required for appellee. PER CURIAM. Affirmed. WARNER, DAMOORGIAN and KUNTZ, JJ., concur. * * * Not final until disposition of timely filed motion for rehearing.</code> | <code>What are the grounds for denying a Rule 3.850 motion in Florida courts?</code> | <code>What are the qualifications to file for an eviction in Florida?</code> |
| <code>Twersky v Incorporated Vil. of Great Neck ( 2015 NY Slip Op 02755 ) Twersky v Incorporated Vil. of Great Neck 2015 NY Slip Op 02755 Decided on April 1, 2015 Appellate Division, Second Department Published by New York State Law Reporting Bureau pursuant to Judiciary Law § 431. This opinion is uncorrected and subject to revision before publication in the Official Reports. Decided on April 1, 2015 SUPREME COURT OF THE STATE OF NEW YORK Appellate Division, Second Judicial Department RANDALL T. ENG, P. J. LEONARD B. AUSTIN JEFFREY A. COHEN BETSY BARROS, JJ. 2014 - 07552 ( Index No. 9576 / 12 ) [ * 1 ] Sharon Twersky, respondent, v Incorporated Village of Great Neck, et al., defendants, FHM Mortgage Corp., et al., appellants. Cascone & Kluepfel, LLP, Garden City, N. Y. ( Howard B. Altman of counsel ), for appellants. Isaacson, Schiowitz & Korson, LLP, Rockville Centre, N. Y. ( Jeremy Schiowitz of counsel ), for respondent. DECISION & ORDER In an action to recover damages for personal injurie...</code> | <code>What legal principles determine a property owner's duty to maintain safe conditions for pedestrians?</code> | <code>What are the tax implications of selling a property in New York State?</code> |
| <code>951 A. 2d 180 ( 2008 ) Philip S. HORNER v. GOVERNOR, State of New Hampshire and another. No. 2007 - 668. Supreme Court of New Hampshire. Argued March 27, 2008. Opinion Issued : June 19, 2008. * 181 Philip S. Horner, pro se, and Richard E. Samdperil, of Exeter ( Mr. Horner on the brief, and Mr. Samdperil orally ), for the plaintiff. Kelly A. Ayotte, attorney general ( Karen A. Schlitzer, assistant attorney general, on the memorandum of law and orally ), for the defendants. BRODERICK, C. J. The plaintiff, Philip S. Horner, appeals an order of the Superior Court ( Smukler, * 182 J. ) denying his petition for a writ of prohibition to enjoin the State from enforcing RSA 651 - B : 11 ( 2007 & Supp. 2007 ), which mandates the collection of a sex offender registration fee. We affirm. The plaintiff was convicted in 2000 of five counts of felonious sexual assault, see RSA 632 - A : 3 ( 2007 ). Every sex offender and offender against children is required to register with the New Hampshire Divisio...</code> | <code>What determines whether a charge is classified as a tax or a fee under New Hampshire law?</code> | <code>What are the tax implications of forming a non-profit organization in the United States?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
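As a reference, a minimal sketch of constructing this loss with those parameters in sentence-transformers (`model` here is assumed to be the SentenceTransformer being fine-tuned):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# scale=20.0 with cosine similarity, matching the parameters listed above.
loss = losses.MultipleNegativesRankingLoss(
    model, scale=20.0, similarity_fct=util.cos_sim
)
```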
### Evaluation Dataset
#### Free-Law-Project/opinions-synthetic-query-8192
* Size: 95 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 95 samples:
| | anchor | positive | negative |
|:--------|:---------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 73 tokens</li><li>mean: 1723.31 tokens</li><li>max: 7494 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 18.89 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 14.46 tokens</li><li>max: 20 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------|
| <code>Mr. Justice Mercur delivered the opinion of the court, November 20th 1882. Both parties claim title to this land under sheriff ’ s sale as the property of James Strouss. The defendant purchased at a sale made in December 1815, the plaintiff at one made in March 1880. The plaintiff seeks to impeach the validity of the first sale * 411on the ground that it was made in fraud of the creditors of Strouss. The law presumes that a public judicial sale is made in good faith. This presumption stands, unless overthrown by clear and satisfactory evidence of fraud or unfair means. The contention was one of fact. Much evidence Avas given bearing on the question, and some of it conflicting. The learned judge submitted the case to the jury in a clear and correct charge. He instructed them that if the sheriff ’ s sale was made with the intention of hindering, delaying or defeating creditors, and the purchaser had knowledge of such, it was null and void, although the full value of the property may have...</code> | <code>What are the legal principles governing fraud and sale validity in sheriff's sales?</code> | <code>What are the legal implications of intellectual property infringement?</code> |
| <code>217 N. J. Super. 541 ( 1987 ) 526 A. 2d 290 ALAN C. STAVER, PLAINTIFF, v. MARGARET STAVER, DEFENDANT. Superior Court of New Jersey, Chancery Division Bergen County, Family Part. March 11, 1987. * 543 Donald L. Garber for plaintiff ( Donald L. Garber, attorney ; Michael I. Lubin on the brief ). John Fiorello for defendant ( Feldman, Feldman, Hoffman & Fiorello, attorneys ). SIMON, MARGUERITE T., J. S. C. Plaintiff husband brings this motion seeking to terminate his obligation to pay alimony to defendant pursuant to a judgment of divorce entered September 6, 1974. Defendant wife brings a cross - motion for enforcement of the judgment. At the time of the entry of the final judgment, plaintiff was employed as an ordained minister earning approximately $ 12, 000 a year. The parties entered into a consensual agreement which was incorporated into the judgment. Two pertinent stipulations of the agreement are as follows : ( 1 ) " Said alimony of $ 500 per month shall continue in effect regardle...</code> | <code>Can pension benefits accrued after a divorce be considered as income for modifying alimony payments?</code> | <code>What are the tax implications of forming a limited liability company (LLC)?</code> |
| <code>Howard, J. : By the ' will of Byron S. Briggs, which was offered for probate in the Surrogate ’ s Court of Madison county, Harriet 0. Briggs, his wife, was appointed executrix. After the surrogate had overruled certain objections to the probate of the will and announced his conclusion that the will should be admitted to probate, written objections were filed to the issuance of letters testamentary to the widow, on the ground that she had deliberately murdered the testator for the purpose of thwarting any attempt on his part to make another will. The objections were filed by the son of the testator ; and his attitude of opposition to the widow was approved by a granddaughter of the testator. These two persons were descendants of the testator by a former wife. They were legatees under the will and had a statutory right to make objections. ( See Code Civ. Proc. § 2636. ) They stood ready with the witnesses in court and offered to make proof of the serious charges which they had preferred ...</code> | <code>Can someone accused of murdering a testator be appointed as an executor of the will?</code> | <code>What are the tax implications for inheriting property in the United States?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
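A minimal sketch of expressing these non-default values through the sentence-transformers trainer API (argument names follow sentence-transformers 3.x; the output directory is a placeholder):

```python
from sentence_transformers.training_args import (
    BatchSamplers,
    SentenceTransformerTrainingArguments,
)

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```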
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 2
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Validation Loss | dev_cosine_accuracy |
|:------:|:----:|:---------------:|:-------------------:|
| -1 | -1 | - | 0.9895 |
| 0.5682 | 100 | 0.0288 | 0.9895 |
| 1.1364 | 200 | 0.0317 | 1.0 |
| 1.7045 | 300 | 0.0166 | 1.0 |
| -1 | -1 | - | 1.0 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION"
] | [
"BEAR"
] |
medspaner/roberta-es-clinical-trials-medic-attr-ner | medspaner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-08-04T15:12:40 | 2024-10-01T06:42:18 | 35 | 0 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: Azitromicina en suspensión oral, 10 mg/kg una vez al día durante siete días
- text: A un grupo se le administró Ciprofloxacino 200 mg bid EV y al otro Cefazolina
1 g tid IV
- text: Administración de una solución de mantenimiento intravenosa isotónica (NaCl
al 0,9% en dextrosa al 5%)
- text: Se excluyen pacientes con contraindicación a aspirina o clopidogrel
model-index:
- name: roberta-es-clinical-trials-medic-attr-ner
results: []
---
# roberta-es-clinical-trials-medic-attr-ner
This named entity recognition model detects medication-related information:
- Contraindication: e.g. *contraindicación a **aspirina***
- Dose, strength or concentration: e.g. *14 mg*, *100.000 UI*
- Form: e.g. *tabletas*, *comprimidos*
- Route: e.g. *vía oral*, *i.v.*
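A minimal usage sketch with the `transformers` token-classification pipeline (the aggregation strategy is an assumption; adjust it to your needs and library version):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="medspaner/roberta-es-clinical-trials-medic-attr-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)

text = "Azitromicina en suspensión oral, 10 mg/kg una vez al día durante siete días"
for entity in ner(text):
    print(entity["entity_group"], "->", entity["word"], round(entity["score"], 3))
```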
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.863 (±0.011)
- Recall: 0.878 (±0.008)
- F1: 0.871 (±0.001)
- Accuracy: 0.997 (±0.001)
## Model description
This model adapts the pre-trained model [bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es), presented in [Pio Carriño et al. (2022)](https://aclanthology.org/2022.bionlp-1.19/).
It is fine-tuned to conduct named entity recognition of medication-related information on Spanish texts about clinical trials.
The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z).
If you use this model, please, cite as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1200 texts about clinical trials studies and clinical trials announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please, cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: average 12 epochs (±3.1); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5)
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.863 (±0.011) | 0.878 (±0.008) | 0.871 (±0.001) | 0.997 (±0.001) |
**Results per class (test set; average and standard deviation of 5 rounds with different seeds)**
| Class | Precision | Recall | F1 | Support |
|:---------------:|:--------------:|:--------------:|:--------------:|:---------:|
| Contraindicated | 0.752 (±0.089) | 0.847 (±0.077) | 0.791 (±0.041) | 76 |
| Dose | 0.830 (±0.032) | 0.838 (±0.035) | 0.833 (±0.001) | 320 |
| Form | 0.971 (±0.029) | 0.889 (±0.024) | 0.928 (±0.021) | 74 |
| Route | 0.934 (±0.012) | 0.916 (±0.024) | 0.925 (±0.012) | 270 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] |
IDEA-CCNL/Randeng-DELLA-226M-Chinese | IDEA-CCNL | text-generation | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"VAE",
"Generation",
"zh",
"arxiv:2207.06130",
"arxiv:2209.02970",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 2022-10-21T09:35:56 | 2023-05-26T06:43:57 | 35 | 1 | ---
language: zh
tags:
- VAE
- Generation
inference: false
---
# Randeng-DELLA-226M-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
在悟道数据集上进行通用预训练的Deep VAE模型。其中编码器和解码器都是GPT-2架构。可以用于下游的句子重写,语义转换,性质控制等任务。
A deep VAE model pretrained on the Wudao dataset. Both encoder and decoder are based on the GPT-2 architecture. This model is particularly suitable for paraphrasing, semantic updating, and fine-grained attribute control.
**请注意本模型是在通用语料上进行的预训练。这增加了模型的泛化能力使其能够在微调时快速适应到下游特定领域上,但同时也弱化了其对通用文本的重构能力。如要获得最佳效果请在特定领域微调后使用,并参考本系列开源的CVAE的做法与效果 [Randeng-DELLA-226M-CVAE-NER-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-DELLA-226M-CVAE-NER-Chinese)。**
**Please bear in mind that this model is pre-trained on an open-domain dataset. Such pretraining enhanced its generalizability and made it capable of adapting to a specific domain easily; however, it also lessened its ability to reconstruct given texts. To get the maximum effect of this model, consider finetuning it in your desired task domain. You can find such an example in [Randeng-DELLA-226M-CVAE-NER-Chinese](https://huggingface.co/IDEA-CCNL/Randeng-DELLA-226M-CVAE-NER-Chinese)**
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言生成 NLG | 燃灯 Randeng | DELLA | 226M | 变分自编码器-中文 VAE-Chinese |
## 模型信息 Model Information
参考论文 Reference Paper:[Fuse It More Deeply! A Variational Transformer with Layer-Wise Latent Variable Inference for Text Generation](https://arxiv.org/abs/2207.06130)
本模型使用了Della论文里的循环潜在向量架构,但对于解码器生成并未采用原论文的low-rank-tensor-product来进行信息融合,而是使用了简单的线性变换后逐位逐词添加的方式。该方式对于开放域数据集的预训练稳定性有较大正向作用。
Note that although we adopted the layer-wise recurrent latent variables structure as the paper, we did not use the low-rank-tensor-product to fuse the latent vectors to the decoder hidden states. Instead we applied a simple linear transformation on the latent vectors and then add them to the hidden states independently.
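As an illustration of that fusion step, a toy PyTorch sketch (all shapes are assumptions for illustration, not the model's actual dimensions):

```python
import torch
import torch.nn as nn

# Toy shapes: (batch, seq_len, d_model) decoder hidden states, (batch, d_latent) latent.
hidden = torch.randn(2, 16, 768)
latent = torch.randn(2, 64)

# A simple linear transformation of the latent vector, added to the hidden state
# at every position, instead of a low-rank tensor product.
proj = nn.Linear(64, 768)
hidden = hidden + proj(latent).unsqueeze(1)  # broadcast over sequence length
```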
## 使用 Usage
```python
# Checkout the latest Fengshenbang-LM directory and run following script under Fengshenbang-LM root directory
import torch
from torch.nn.utils.rnn import pad_sequence
from fengshen.models.deepVAE.deep_vae import Della
from transformers.models.bert.tokenization_bert import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("IDEA-CCNL/Randeng-DELLA-226M-Chinese")
vae_model = Della.from_pretrained("IDEA-CCNL/Randeng-DELLA-226M-Chinese")
special_tokens_dict = {'bos_token': '<BOS>', 'eos_token': '<EOS>'}
tokenizer.add_special_tokens(special_tokens_dict)
sentence = "本模型是在通用数据集下预训练的VAE模型,如要获得最佳效果请在特定领域微调后使用。"
tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(sentence))
decoder_target = [tokenizer.bos_token_id] + tokenized_text + [tokenizer.eos_token_id]
inputs = []
inputs.append(torch.tensor(decoder_target, dtype=torch.long))
inputs = pad_sequence(inputs, batch_first=True, padding_value=0)
max_length = 256
top_p = 0.5
top_k = 0
temperature = .7
repetition_penalty = 1.0
sample = False
device = 0
model = vae_model.eval()
model = model.to(device)
outputs = model.model.inference(inputs.to(device), top_p=top_p, top_k=top_k, max_length=max_length, sample=sample,
temperature=temperature, repetition_penalty=repetition_penalty)
for gen_sent, orig_sent in zip(outputs, inputs):
print('orig_sent:', tokenizer.decode(orig_sent).replace(' ', ''))
print('gen_sent:', tokenizer.decode(gen_sent).replace(' ', ''))
print("-"*20)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | [
"PARAPHRASING"
] | [
"BEAR"
] |
YoLo2000/TiLamb-7B | YoLo2000 | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"bo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-02T12:20:16 | 2024-04-03T01:08:11 | 35 | 1 | ---
language:
- bo
license: apache-2.0
---
# TiLamb-7B(Tibetan Large Language Model Base)
**TiLamb-7B** 是藏文大语言模型的基座模型,它使用了 26.43GB 的藏文语料,基于Meta发布的可商用大模型 LLaMA2-7B 模型,通过 LoRA 方法进行了增量预训练。该模型在 LLaMA2 的基础上扩展了词表,从原有的词表大小 32,000 扩充藏文词汇至 61,221,并对 LLaMA2-7B 原始模型的 embedding 和 lm_head 进行了均值扩充初始化。更多信息请访问 [TiLamb-7B GitHub 主页](https://github.com/NLP-Learning/TiLamb)。
**重要说明**:
- TiLamb-7B 是一个未经监督微调的基座模型,**不具备对话能力**。
- 要进行藏文对话和藏文 NLP 下游任务的适配(已验证的任务包括藏文新闻分类、藏文实体关系分类、藏文机器阅读理解、藏文分词、藏文摘要、藏文问题回答和藏文问题生成),建议使用 [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/tree/main) 框架进行微调。
**使用须知**:
- 本项目基于 Meta 发布的 LLaMA2-7B 模型开发,使用时请严格遵守 LLaMA2-7B 的开源许可协议。
- 如果涉及使用第三方代码,请务必遵从相关的开源许可协议。
- 模型生成的内容准确性可能受到计算方法、随机因素等的影响,因此,我们不对模型输出的准确性提供任何保证,也不会对使用相关资源和输出结果产生的任何损失承担责任。
- 如果将相关模型用于商业用途,开发者应遵守当地法律法规,确保模型输出内容的合规性。本项目不对任何由此衍生的产品或服务承担责任。
# TiLamb-7B (Tibetan Large Language Model Base)
**TiLamb-7B** is the foundational model for the Tibetan language, trained on 26.43GB of Tibetan corpora. It is based on Meta's commercially usable large model, LLaMA2-7B, and has been incrementally pre-trained with the LoRA method. The model extends LLaMA2 by enlarging the vocabulary from the original 32,000 tokens to 61,221 with added Tibetan vocabulary, and initializes the embedding and lm_head of the original LLaMA2-7B model through mean expansion. For more information, please visit the [TiLamb-7B GitHub page](https://github.com/NLP-Learning/TiLamb).
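As a rough illustration of mean-expansion initialization, consider the sketch below. It uses the Hugging Face `transformers` API and is not the TiLamb training code; the base checkpoint name is a hypothetical placeholder:
```python
# Minimal sketch (not the TiLamb training code) of growing a LLaMA-2
# vocabulary from 32,000 to 61,221 tokens with mean-expansion initialization.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # illustrative base
old_vocab, new_vocab = 32000, 61221

model.resize_token_embeddings(new_vocab)  # rows 0..old_vocab-1 are preserved

with torch.no_grad():
    emb = model.get_input_embeddings().weight
    head = model.get_output_embeddings().weight
    # Initialize every newly added row with the mean of the original rows
    # (for both the input embedding and the lm_head) instead of random noise.
    emb[old_vocab:] = emb[:old_vocab].mean(dim=0, keepdim=True)
    head[old_vocab:] = head[:old_vocab].mean(dim=0, keepdim=True)
```
Mean initialization keeps the new Tibetan token embeddings in the same region of the embedding space as the original vocabulary, which tends to make the subsequent LoRA incremental pretraining better conditioned than random initialization.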
**Important Notes**:
- TiLamb-7B is a base model that has not undergone supervised fine-tuning and therefore **lacks conversational capabilities**.
- For adaptation to Tibetan dialogue and Tibetan NLP downstream tasks (verified tasks include Tibetan news classification, Tibetan entity relation classification, Tibetan machine reading comprehension, Tibetan word segmentation, Tibetan summarization, Tibetan question answering, and Tibetan question generation), it is recommended to use the [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/tree/main) framework for fine-tuning.
**Usage Notice**:
- This project is developed based on the LLaMA2-7B model released by Meta, and its use must strictly adhere to the open-source license agreement of LLaMA2-7B.
- If third-party code is involved, it is essential to comply with the relevant open-source license agreements.
- The accuracy of the content generated by the model may be affected by computational methods, random factors, and other influences; therefore, we provide no guarantee for the accuracy of the model outputs, nor will we bear any responsibility for losses arising from the use of the related resources and outputs.
- If the related models are used for commercial purposes, developers must comply with local laws and regulations to ensure the compliance of the model output content. This project will not bear any responsibility for any products or services derived from it.
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"BEAR"
] |
Mihaiii/Squirtle | Mihaiii | sentence-similarity | [
"sentence-transformers",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"bge",
"mteb",
"dataset:Mihaiii/qa-assistant",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-30T15:06:52 | 2024-04-30T20:00:05 | 35 | 1 | ---
datasets:
- Mihaiii/qa-assistant
library_name: sentence-transformers
license: mit
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- bge
- mteb
model-index:
- name: Squirtle
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 69.59701492537313
- type: ap
value: 31.80839087521638
- type: f1
value: 63.43204352573031
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 82.09027499999999
- type: ap
value: 76.95004336850603
- type: f1
value: 82.04505556179174
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 41.943999999999996
- type: f1
value: 40.40964457303876
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 13.869000000000002
- type: map_at_10
value: 24.631
- type: map_at_100
value: 25.965
- type: map_at_1000
value: 26.023000000000003
- type: map_at_20
value: 25.442999999999998
- type: map_at_3
value: 20.827
- type: map_at_5
value: 22.776
- type: mrr_at_1
value: 14.580000000000002
- type: mrr_at_10
value: 24.91
- type: mrr_at_100
value: 26.229999999999997
- type: mrr_at_1000
value: 26.288
- type: mrr_at_20
value: 25.708
- type: mrr_at_3
value: 21.136
- type: mrr_at_5
value: 23.02
- type: ndcg_at_1
value: 13.869000000000002
- type: ndcg_at_10
value: 31.14
- type: ndcg_at_100
value: 37.885999999999996
- type: ndcg_at_1000
value: 39.497
- type: ndcg_at_20
value: 34.068
- type: ndcg_at_3
value: 23.163
- type: ndcg_at_5
value: 26.677
- type: precision_at_1
value: 13.869000000000002
- type: precision_at_10
value: 5.220000000000001
- type: precision_at_100
value: 0.844
- type: precision_at_1000
value: 0.097
- type: precision_at_20
value: 3.186
- type: precision_at_3
value: 9.981
- type: precision_at_5
value: 7.696
- type: recall_at_1
value: 13.869000000000002
- type: recall_at_10
value: 52.205
- type: recall_at_100
value: 84.42399999999999
- type: recall_at_1000
value: 97.297
- type: recall_at_20
value: 63.727000000000004
- type: recall_at_3
value: 29.942999999999998
- type: recall_at_5
value: 38.478
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 33.042527574996505
- type: v_measures
value:
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- 0.39238640618067383
- 0.3932595512850983
- 0.3875472939281748
- 0.39822946285500505
- 0.39839156092566014
- 0.40184636328122075
- 0.39008499175162326
- 0.3984035967802891
- 0.39159106298575347
- 0.3923217036338575
- 0.3916410911561569
- 0.2357749280106326
- 0.23682806457721106
- 0.3122239617657793
- 0.26610676013174756
- 0.18123482803921434
- 0.2504695156635453
- 0.10917464735757001
- 0.16714512698028008
- 1.0
- 0.19931410358764295
- 0.2896613951792161
- 0.2974905938215674
- 0.28195491579456905
- 0.3008325954323272
- 0.3012695848509836
- 0.28933380000430453
- 0.297420818100457
- 0.2792041800887245
- 0.3049968405105834
- 0.30704380358904726
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 24.68133686033884
- type: v_measures
value:
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- 0.2005976632299017
- 0.208968006943616
- 0.20946008190179435
- 0.20539809799180958
- 0.21463587994609631
- 0.20913407901977635
- 0.20908020832330956
- 0.1944493063711425
- 0.20181175619582953
- 0.2249901827151246
- 0.29132293951181787
- 0.29570222215271086
- 0.2796075942678196
- 0.28871411057617774
- 0.29302758518431116
- 0.29227253592096986
- 0.2856462545898644
- 0.28687743467743254
- 0.2900793948371436
- 0.28627385826697854
- 0.27308659940457203
- 0.14117319401377473
- 0.1761477350541332
- 0.24048342650129406
- 0.19387054212465876
- 0.14470023981605995
- 0.16704070762984086
- 0.07547453139959907
- 0.127993495025131
- 1.0
- 0.14319476311235024
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 52.344372012529384
- type: mrr
value: 65.32614430813877
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 69.44065444549933
- type: cos_sim_spearman
value: 71.77814153774398
- type: euclidean_pearson
value: 70.59416783558756
- type: euclidean_spearman
value: 71.77814153774398
- type: manhattan_pearson
value: 70.99287197201959
- type: manhattan_spearman
value: 72.0769435268729
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 67.12987012987013
- type: f1
value: 65.99991975715585
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 30.861774505346606
- type: v_measures
value:
- 0.3057878417529878
- 0.3086229109676654
- 0.3080657568280612
- 0.3002878816865892
- 0.30903247986282023
- 0.3022960257813801
- 0.31981283125167154
- 0.3119766955566159
- 0.3039859162306553
- 0.31630911061621453
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 21.100665285420916
- type: v_measures
value:
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- 0.21042268101320297
- 0.19607301651541253
- 0.21811669828359762
- 0.20892482431651227
- 0.20621532003083415
- 0.215815720040119
- 0.20517452774094483
- 0.21396360841093787
- 0.20967704706047804
- 0.22568308513005236
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 17.835
- type: map_at_10
value: 24.718999999999998
- type: map_at_100
value: 25.755
- type: map_at_1000
value: 25.887
- type: map_at_20
value: 25.217
- type: map_at_3
value: 23.076
- type: map_at_5
value: 23.96
- type: mrr_at_1
value: 23.033
- type: mrr_at_10
value: 29.868
- type: mrr_at_100
value: 30.757
- type: mrr_at_1000
value: 30.834
- type: mrr_at_20
value: 30.37
- type: mrr_at_3
value: 28.112
- type: mrr_at_5
value: 29.185
- type: ndcg_at_1
value: 23.033
- type: ndcg_at_10
value: 28.899
- type: ndcg_at_100
value: 33.788000000000004
- type: ndcg_at_1000
value: 36.962
- type: ndcg_at_20
value: 30.497000000000003
- type: ndcg_at_3
value: 26.442
- type: ndcg_at_5
value: 27.466
- type: precision_at_1
value: 23.033
- type: precision_at_10
value: 5.351
- type: precision_at_100
value: 0.9610000000000001
- type: precision_at_1000
value: 0.151
- type: precision_at_20
value: 3.2259999999999995
- type: precision_at_3
value: 12.923000000000002
- type: precision_at_5
value: 8.956
- type: recall_at_1
value: 17.835
- type: recall_at_10
value: 36.034
- type: recall_at_100
value: 57.615
- type: recall_at_1000
value: 79.72
- type: recall_at_20
value: 41.894999999999996
- type: recall_at_3
value: 28.313
- type: recall_at_5
value: 31.639
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 12.166
- type: map_at_10
value: 16.320999999999998
- type: map_at_100
value: 16.954
- type: map_at_1000
value: 17.054
- type: map_at_20
value: 16.651
- type: map_at_3
value: 14.890999999999998
- type: map_at_5
value: 15.695999999999998
- type: mrr_at_1
value: 15.287
- type: mrr_at_10
value: 19.487
- type: mrr_at_100
value: 20.11
- type: mrr_at_1000
value: 20.185
- type: mrr_at_20
value: 19.830000000000002
- type: mrr_at_3
value: 18.068
- type: mrr_at_5
value: 18.855
- type: ndcg_at_1
value: 15.287
- type: ndcg_at_10
value: 19.198999999999998
- type: ndcg_at_100
value: 22.395
- type: ndcg_at_1000
value: 25.106
- type: ndcg_at_20
value: 20.297
- type: ndcg_at_3
value: 16.743
- type: ndcg_at_5
value: 17.855999999999998
- type: precision_at_1
value: 15.287
- type: precision_at_10
value: 3.605
- type: precision_at_100
value: 0.638
- type: precision_at_1000
value: 0.108
- type: precision_at_20
value: 2.166
- type: precision_at_3
value: 8.089
- type: precision_at_5
value: 5.822
- type: recall_at_1
value: 12.166
- type: recall_at_10
value: 24.701999999999998
- type: recall_at_100
value: 39.199
- type: recall_at_1000
value: 58.205
- type: recall_at_20
value: 28.791
- type: recall_at_3
value: 17.469
- type: recall_at_5
value: 20.615
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 19.667
- type: map_at_10
value: 27.163999999999998
- type: map_at_100
value: 28.044000000000004
- type: map_at_1000
value: 28.142
- type: map_at_20
value: 27.645999999999997
- type: map_at_3
value: 24.914
- type: map_at_5
value: 26.078000000000003
- type: mrr_at_1
value: 23.197000000000003
- type: mrr_at_10
value: 30.202
- type: mrr_at_100
value: 30.976
- type: mrr_at_1000
value: 31.047000000000004
- type: mrr_at_20
value: 30.636000000000003
- type: mrr_at_3
value: 28.004
- type: mrr_at_5
value: 29.164
- type: ndcg_at_1
value: 23.197000000000003
- type: ndcg_at_10
value: 31.618000000000002
- type: ndcg_at_100
value: 35.977
- type: ndcg_at_1000
value: 38.458
- type: ndcg_at_20
value: 33.242
- type: ndcg_at_3
value: 27.285999999999998
- type: ndcg_at_5
value: 29.163
- type: precision_at_1
value: 23.197000000000003
- type: precision_at_10
value: 5.26
- type: precision_at_100
value: 0.8200000000000001
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_20
value: 3.082
- type: precision_at_3
value: 12.247
- type: precision_at_5
value: 8.577
- type: recall_at_1
value: 19.667
- type: recall_at_10
value: 42.443
- type: recall_at_100
value: 62.254
- type: recall_at_1000
value: 80.44
- type: recall_at_20
value: 48.447
- type: recall_at_3
value: 30.518
- type: recall_at_5
value: 35.22
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 10.923
- type: map_at_10
value: 14.24
- type: map_at_100
value: 15.001000000000001
- type: map_at_1000
value: 15.092
- type: map_at_20
value: 14.623
- type: map_at_3
value: 13.168
- type: map_at_5
value: 13.678
- type: mrr_at_1
value: 11.525
- type: mrr_at_10
value: 15.187000000000001
- type: mrr_at_100
value: 15.939999999999998
- type: mrr_at_1000
value: 16.03
- type: mrr_at_20
value: 15.557000000000002
- type: mrr_at_3
value: 13.991999999999999
- type: mrr_at_5
value: 14.557
- type: ndcg_at_1
value: 11.525
- type: ndcg_at_10
value: 16.512999999999998
- type: ndcg_at_100
value: 20.445
- type: ndcg_at_1000
value: 23.398
- type: ndcg_at_20
value: 17.832
- type: ndcg_at_3
value: 14.224
- type: ndcg_at_5
value: 15.136
- type: precision_at_1
value: 11.525
- type: precision_at_10
value: 2.565
- type: precision_at_100
value: 0.484
- type: precision_at_1000
value: 0.076
- type: precision_at_20
value: 1.582
- type: precision_at_3
value: 5.989
- type: precision_at_5
value: 4.1579999999999995
- type: recall_at_1
value: 10.923
- type: recall_at_10
value: 22.695
- type: recall_at_100
value: 40.892
- type: recall_at_1000
value: 64.456
- type: recall_at_20
value: 27.607
- type: recall_at_3
value: 16.348
- type: recall_at_5
value: 18.504
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 5.409
- type: map_at_10
value: 8.584999999999999
- type: map_at_100
value: 9.392
- type: map_at_1000
value: 9.5
- type: map_at_20
value: 8.943
- type: map_at_3
value: 7.3
- type: map_at_5
value: 7.962
- type: mrr_at_1
value: 6.965000000000001
- type: mrr_at_10
value: 10.593
- type: mrr_at_100
value: 11.496
- type: mrr_at_1000
value: 11.578
- type: mrr_at_20
value: 11.021
- type: mrr_at_3
value: 8.976
- type: mrr_at_5
value: 9.797
- type: ndcg_at_1
value: 6.965000000000001
- type: ndcg_at_10
value: 11.056000000000001
- type: ndcg_at_100
value: 15.683
- type: ndcg_at_1000
value: 18.873
- type: ndcg_at_20
value: 12.331
- type: ndcg_at_3
value: 8.334
- type: ndcg_at_5
value: 9.512
- type: precision_at_1
value: 6.965000000000001
- type: precision_at_10
value: 2.177
- type: precision_at_100
value: 0.54
- type: precision_at_1000
value: 0.095
- type: precision_at_20
value: 1.468
- type: precision_at_3
value: 3.9800000000000004
- type: precision_at_5
value: 3.109
- type: recall_at_1
value: 5.409
- type: recall_at_10
value: 16.895
- type: recall_at_100
value: 38.167
- type: recall_at_1000
value: 61.783
- type: recall_at_20
value: 21.248
- type: recall_at_3
value: 9.518
- type: recall_at_5
value: 12.426
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 13.688
- type: map_at_10
value: 19.096
- type: map_at_100
value: 20.058
- type: map_at_1000
value: 20.194000000000003
- type: map_at_20
value: 19.595000000000002
- type: map_at_3
value: 17.313000000000002
- type: map_at_5
value: 18.41
- type: mrr_at_1
value: 17.132
- type: mrr_at_10
value: 22.95
- type: mrr_at_100
value: 23.799
- type: mrr_at_1000
value: 23.884
- type: mrr_at_20
value: 23.419999999999998
- type: mrr_at_3
value: 20.95
- type: mrr_at_5
value: 22.21
- type: ndcg_at_1
value: 17.132
- type: ndcg_at_10
value: 22.88
- type: ndcg_at_100
value: 27.572000000000003
- type: ndcg_at_1000
value: 30.824
- type: ndcg_at_20
value: 24.516
- type: ndcg_at_3
value: 19.64
- type: ndcg_at_5
value: 21.4
- type: precision_at_1
value: 17.132
- type: precision_at_10
value: 4.263999999999999
- type: precision_at_100
value: 0.7969999999999999
- type: precision_at_1000
value: 0.125
- type: precision_at_20
value: 2.6519999999999997
- type: precision_at_3
value: 9.336
- type: precision_at_5
value: 6.93
- type: recall_at_1
value: 13.688
- type: recall_at_10
value: 30.537999999999997
- type: recall_at_100
value: 51.017999999999994
- type: recall_at_1000
value: 73.921
- type: recall_at_20
value: 36.174
- type: recall_at_3
value: 21.568
- type: recall_at_5
value: 26.127
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 8.173
- type: map_at_10
value: 11.648
- type: map_at_100
value: 12.434000000000001
- type: map_at_1000
value: 12.540000000000001
- type: map_at_20
value: 12.030000000000001
- type: map_at_3
value: 10.568
- type: map_at_5
value: 11.064
- type: mrr_at_1
value: 10.274
- type: mrr_at_10
value: 14.505
- type: mrr_at_100
value: 15.332
- type: mrr_at_1000
value: 15.409
- type: mrr_at_20
value: 14.899999999999999
- type: mrr_at_3
value: 13.375
- type: mrr_at_5
value: 13.929
- type: ndcg_at_1
value: 10.274
- type: ndcg_at_10
value: 14.283999999999999
- type: ndcg_at_100
value: 18.731
- type: ndcg_at_1000
value: 21.744
- type: ndcg_at_20
value: 15.647
- type: ndcg_at_3
value: 12.278
- type: ndcg_at_5
value: 12.974
- type: precision_at_1
value: 10.274
- type: precision_at_10
value: 2.683
- type: precision_at_100
value: 0.582
- type: precision_at_1000
value: 0.099
- type: precision_at_20
value: 1.7409999999999999
- type: precision_at_3
value: 6.088
- type: precision_at_5
value: 4.201
- type: recall_at_1
value: 8.173
- type: recall_at_10
value: 19.642
- type: recall_at_100
value: 40.213
- type: recall_at_1000
value: 62.083999999999996
- type: recall_at_20
value: 24.537
- type: recall_at_3
value: 13.700999999999999
- type: recall_at_5
value: 15.751000000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 11.252416666666667
- type: map_at_10
value: 15.589583333333334
- type: map_at_100
value: 16.381166666666665
- type: map_at_1000
value: 16.490333333333332
- type: map_at_20
value: 15.99116666666667
- type: map_at_3
value: 14.140916666666667
- type: map_at_5
value: 14.9045
- type: mrr_at_1
value: 13.710416666666664
- type: mrr_at_10
value: 18.34416666666667
- type: mrr_at_100
value: 19.110083333333336
- type: mrr_at_1000
value: 19.192583333333335
- type: mrr_at_20
value: 18.74783333333333
- type: mrr_at_3
value: 16.799416666666666
- type: mrr_at_5
value: 17.62725
- type: ndcg_at_1
value: 13.710416666666664
- type: ndcg_at_10
value: 18.628583333333335
- type: ndcg_at_100
value: 22.733666666666668
- type: ndcg_at_1000
value: 25.728499999999997
- type: ndcg_at_20
value: 19.994500000000002
- type: ndcg_at_3
value: 15.918083333333332
- type: ndcg_at_5
value: 17.086999999999996
- type: precision_at_1
value: 13.710416666666664
- type: precision_at_10
value: 3.3575
- type: precision_at_100
value: 0.6368333333333333
- type: precision_at_1000
value: 0.10508333333333333
- type: precision_at_20
value: 2.074833333333333
- type: precision_at_3
value: 7.440333333333333
- type: precision_at_5
value: 5.341916666666667
- type: recall_at_1
value: 11.252416666666667
- type: recall_at_10
value: 25.200833333333332
- type: recall_at_100
value: 44.075333333333326
- type: recall_at_1000
value: 66.12541666666665
- type: recall_at_20
value: 30.24916666666667
- type: recall_at_3
value: 17.46591666666667
- type: recall_at_5
value: 20.53691666666667
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 8.696
- type: map_at_10
value: 12.339
- type: map_at_100
value: 12.946
- type: map_at_1000
value: 13.04
- type: map_at_20
value: 12.6
- type: map_at_3
value: 11.06
- type: map_at_5
value: 11.530999999999999
- type: mrr_at_1
value: 10.276
- type: mrr_at_10
value: 14.463999999999999
- type: mrr_at_100
value: 15.07
- type: mrr_at_1000
value: 15.152
- type: mrr_at_20
value: 14.737
- type: mrr_at_3
value: 13.037
- type: mrr_at_5
value: 13.627
- type: ndcg_at_1
value: 10.276
- type: ndcg_at_10
value: 15.085
- type: ndcg_at_100
value: 18.538
- type: ndcg_at_1000
value: 21.461
- type: ndcg_at_20
value: 15.976
- type: ndcg_at_3
value: 12.454
- type: ndcg_at_5
value: 13.195
- type: precision_at_1
value: 10.276
- type: precision_at_10
value: 2.669
- type: precision_at_100
value: 0.48900000000000005
- type: precision_at_1000
value: 0.08
- type: precision_at_20
value: 1.572
- type: precision_at_3
value: 5.726
- type: precision_at_5
value: 3.9570000000000003
- type: recall_at_1
value: 8.696
- type: recall_at_10
value: 21.766
- type: recall_at_100
value: 38.269
- type: recall_at_1000
value: 61.106
- type: recall_at_20
value: 24.992
- type: recall_at_3
value: 14.032
- type: recall_at_5
value: 15.967999999999998
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 6.13
- type: map_at_10
value: 9.067
- type: map_at_100
value: 9.687999999999999
- type: map_at_1000
value: 9.792
- type: map_at_20
value: 9.384
- type: map_at_3
value: 8.006
- type: map_at_5
value: 8.581999999999999
- type: mrr_at_1
value: 7.605
- type: mrr_at_10
value: 11.111
- type: mrr_at_100
value: 11.745999999999999
- type: mrr_at_1000
value: 11.837
- type: mrr_at_20
value: 11.452
- type: mrr_at_3
value: 9.922
- type: mrr_at_5
value: 10.522
- type: ndcg_at_1
value: 7.605
- type: ndcg_at_10
value: 11.302
- type: ndcg_at_100
value: 14.629
- type: ndcg_at_1000
value: 17.739
- type: ndcg_at_20
value: 12.411
- type: ndcg_at_3
value: 9.28
- type: ndcg_at_5
value: 10.161000000000001
- type: precision_at_1
value: 7.605
- type: precision_at_10
value: 2.22
- type: precision_at_100
value: 0.46499999999999997
- type: precision_at_1000
value: 0.087
- type: precision_at_20
value: 1.428
- type: precision_at_3
value: 4.565
- type: precision_at_5
value: 3.3649999999999998
- type: recall_at_1
value: 6.13
- type: recall_at_10
value: 16.009999999999998
- type: recall_at_100
value: 31.467
- type: recall_at_1000
value: 54.722
- type: recall_at_20
value: 20.137
- type: recall_at_3
value: 10.347000000000001
- type: recall_at_5
value: 12.692
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 11.645
- type: map_at_10
value: 15.466
- type: map_at_100
value: 16.147
- type: map_at_1000
value: 16.247
- type: map_at_20
value: 15.806999999999999
- type: map_at_3
value: 14.011000000000001
- type: map_at_5
value: 14.967
- type: mrr_at_1
value: 14.179
- type: mrr_at_10
value: 18.512
- type: mrr_at_100
value: 19.184
- type: mrr_at_1000
value: 19.267
- type: mrr_at_20
value: 18.855
- type: mrr_at_3
value: 16.993
- type: mrr_at_5
value: 17.954
- type: ndcg_at_1
value: 14.179
- type: ndcg_at_10
value: 18.311
- type: ndcg_at_100
value: 21.996
- type: ndcg_at_1000
value: 24.942
- type: ndcg_at_20
value: 19.522000000000002
- type: ndcg_at_3
value: 15.593000000000002
- type: ndcg_at_5
value: 17.116
- type: precision_at_1
value: 14.179
- type: precision_at_10
value: 3.116
- type: precision_at_100
value: 0.5519999999999999
- type: precision_at_1000
value: 0.091
- type: precision_at_20
value: 1.87
- type: precision_at_3
value: 7.090000000000001
- type: precision_at_5
value: 5.224
- type: recall_at_1
value: 11.645
- type: recall_at_10
value: 24.206
- type: recall_at_100
value: 41.29
- type: recall_at_1000
value: 63.205999999999996
- type: recall_at_20
value: 28.659000000000002
- type: recall_at_3
value: 16.771
- type: recall_at_5
value: 20.602
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 12.435
- type: map_at_10
value: 17.263
- type: map_at_100
value: 18.137
- type: map_at_1000
value: 18.282999999999998
- type: map_at_20
value: 17.724
- type: map_at_3
value: 15.648000000000001
- type: map_at_5
value: 16.542
- type: mrr_at_1
value: 15.809999999999999
- type: mrr_at_10
value: 20.687
- type: mrr_at_100
value: 21.484
- type: mrr_at_1000
value: 21.567
- type: mrr_at_20
value: 21.124000000000002
- type: mrr_at_3
value: 19.104
- type: mrr_at_5
value: 19.974
- type: ndcg_at_1
value: 15.809999999999999
- type: ndcg_at_10
value: 20.801
- type: ndcg_at_100
value: 25.001
- type: ndcg_at_1000
value: 28.347
- type: ndcg_at_20
value: 22.223000000000003
- type: ndcg_at_3
value: 18.046
- type: ndcg_at_5
value: 19.308
- type: precision_at_1
value: 15.809999999999999
- type: precision_at_10
value: 4.032
- type: precision_at_100
value: 0.832
- type: precision_at_1000
value: 0.16
- type: precision_at_20
value: 2.54
- type: precision_at_3
value: 8.63
- type: precision_at_5
value: 6.4030000000000005
- type: recall_at_1
value: 12.435
- type: recall_at_10
value: 27.495000000000005
- type: recall_at_100
value: 47.522999999999996
- type: recall_at_1000
value: 70.804
- type: recall_at_20
value: 33.334
- type: recall_at_3
value: 19.192
- type: recall_at_5
value: 22.435
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 8.262
- type: map_at_10
value: 11.167
- type: map_at_100
value: 12.017999999999999
- type: map_at_1000
value: 12.113
- type: map_at_20
value: 11.674
- type: map_at_3
value: 9.736
- type: map_at_5
value: 10.384
- type: mrr_at_1
value: 9.242
- type: mrr_at_10
value: 12.564
- type: mrr_at_100
value: 13.427
- type: mrr_at_1000
value: 13.520999999999999
- type: mrr_at_20
value: 13.072000000000001
- type: mrr_at_3
value: 11.06
- type: mrr_at_5
value: 11.753
- type: ndcg_at_1
value: 9.242
- type: ndcg_at_10
value: 13.594999999999999
- type: ndcg_at_100
value: 18.049
- type: ndcg_at_1000
value: 20.888
- type: ndcg_at_20
value: 15.440000000000001
- type: ndcg_at_3
value: 10.697
- type: ndcg_at_5
value: 11.757
- type: precision_at_1
value: 9.242
- type: precision_at_10
value: 2.348
- type: precision_at_100
value: 0.482
- type: precision_at_1000
value: 0.077
- type: precision_at_20
value: 1.5709999999999997
- type: precision_at_3
value: 4.621
- type: precision_at_5
value: 3.401
- type: recall_at_1
value: 8.262
- type: recall_at_10
value: 19.983999999999998
- type: recall_at_100
value: 40.997
- type: recall_at_1000
value: 63.058
- type: recall_at_20
value: 27.168999999999997
- type: recall_at_3
value: 11.814
- type: recall_at_5
value: 14.463999999999999
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 4.058
- type: map_at_10
value: 6.734
- type: map_at_100
value: 7.593999999999999
- type: map_at_1000
value: 7.736999999999999
- type: map_at_20
value: 7.102
- type: map_at_3
value: 5.559
- type: map_at_5
value: 6.178999999999999
- type: mrr_at_1
value: 8.404
- type: mrr_at_10
value: 13.514999999999999
- type: mrr_at_100
value: 14.518
- type: mrr_at_1000
value: 14.599
- type: mrr_at_20
value: 14.025000000000002
- type: mrr_at_3
value: 11.584999999999999
- type: mrr_at_5
value: 12.588
- type: ndcg_at_1
value: 8.404
- type: ndcg_at_10
value: 10.02
- type: ndcg_at_100
value: 14.771999999999998
- type: ndcg_at_1000
value: 18.251
- type: ndcg_at_20
value: 11.378
- type: ndcg_at_3
value: 7.675
- type: ndcg_at_5
value: 8.558
- type: precision_at_1
value: 8.404
- type: precision_at_10
value: 3.212
- type: precision_at_100
value: 0.83
- type: precision_at_1000
value: 0.146
- type: precision_at_20
value: 2.186
- type: precision_at_3
value: 5.624
- type: precision_at_5
value: 4.5600000000000005
- type: recall_at_1
value: 4.058
- type: recall_at_10
value: 12.751999999999999
- type: recall_at_100
value: 30.219
- type: recall_at_1000
value: 50.749
- type: recall_at_20
value: 16.634
- type: recall_at_3
value: 7.234999999999999
- type: recall_at_5
value: 9.418
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 5.516
- type: map_at_10
value: 11.001
- type: map_at_100
value: 14.527999999999999
- type: map_at_1000
value: 15.417
- type: map_at_20
value: 12.446
- type: map_at_3
value: 8.269
- type: map_at_5
value: 9.345
- type: mrr_at_1
value: 43.5
- type: mrr_at_10
value: 54.078
- type: mrr_at_100
value: 54.655
- type: mrr_at_1000
value: 54.679
- type: mrr_at_20
value: 54.461999999999996
- type: mrr_at_3
value: 51.37500000000001
- type: mrr_at_5
value: 53.25
- type: ndcg_at_1
value: 33.125
- type: ndcg_at_10
value: 25.665
- type: ndcg_at_100
value: 28.116000000000003
- type: ndcg_at_1000
value: 34.477000000000004
- type: ndcg_at_20
value: 25.027
- type: ndcg_at_3
value: 28.4
- type: ndcg_at_5
value: 27.094
- type: precision_at_1
value: 43.5
- type: precision_at_10
value: 21.65
- type: precision_at_100
value: 6.351999999999999
- type: precision_at_1000
value: 1.306
- type: precision_at_20
value: 15.662
- type: precision_at_3
value: 32.333
- type: precision_at_5
value: 28.199999999999996
- type: recall_at_1
value: 5.516
- type: recall_at_10
value: 15.457
- type: recall_at_100
value: 32.903
- type: recall_at_1000
value: 53.81700000000001
- type: recall_at_20
value: 20.365
- type: recall_at_3
value: 9.528
- type: recall_at_5
value: 11.619
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 45.79
- type: f1
value: 38.89634882093881
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 18.063000000000002
- type: map_at_10
value: 24.911
- type: map_at_100
value: 25.688
- type: map_at_1000
value: 25.758
- type: map_at_20
value: 25.358999999999998
- type: map_at_3
value: 22.743
- type: map_at_5
value: 23.924
- type: mrr_at_1
value: 19.472
- type: mrr_at_10
value: 26.587
- type: mrr_at_100
value: 27.362
- type: mrr_at_1000
value: 27.428
- type: mrr_at_20
value: 27.040999999999997
- type: mrr_at_3
value: 24.362000000000002
- type: mrr_at_5
value: 25.593
- type: ndcg_at_1
value: 19.472
- type: ndcg_at_10
value: 29.183999999999997
- type: ndcg_at_100
value: 33.207
- type: ndcg_at_1000
value: 35.21
- type: ndcg_at_20
value: 30.791
- type: ndcg_at_3
value: 24.701999999999998
- type: ndcg_at_5
value: 26.823000000000004
- type: precision_at_1
value: 19.472
- type: precision_at_10
value: 4.469
- type: precision_at_100
value: 0.6629999999999999
- type: precision_at_1000
value: 0.08499999999999999
- type: precision_at_20
value: 2.59
- type: precision_at_3
value: 10.401
- type: precision_at_5
value: 7.363
- type: recall_at_1
value: 18.063000000000002
- type: recall_at_10
value: 41.071999999999996
- type: recall_at_100
value: 60.049
- type: recall_at_1000
value: 75.64699999999999
- type: recall_at_20
value: 47.211999999999996
- type: recall_at_3
value: 28.796
- type: recall_at_5
value: 33.894999999999996
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 2.45
- type: map_at_10
value: 4.255
- type: map_at_100
value: 4.809
- type: map_at_1000
value: 4.954
- type: map_at_20
value: 4.513
- type: map_at_3
value: 3.4029999999999996
- type: map_at_5
value: 3.782
- type: mrr_at_1
value: 4.938
- type: mrr_at_10
value: 8.231
- type: mrr_at_100
value: 8.902000000000001
- type: mrr_at_1000
value: 9.019
- type: mrr_at_20
value: 8.530999999999999
- type: mrr_at_3
value: 6.944
- type: mrr_at_5
value: 7.623
- type: ndcg_at_1
value: 4.938
- type: ndcg_at_10
value: 6.425
- type: ndcg_at_100
value: 9.661999999999999
- type: ndcg_at_1000
value: 13.911999999999999
- type: ndcg_at_20
value: 7.3
- type: ndcg_at_3
value: 4.907
- type: ndcg_at_5
value: 5.406
- type: precision_at_1
value: 4.938
- type: precision_at_10
value: 2.037
- type: precision_at_100
value: 0.528
- type: precision_at_1000
value: 0.125
- type: precision_at_20
value: 1.366
- type: precision_at_3
value: 3.344
- type: precision_at_5
value: 2.7470000000000003
- type: recall_at_1
value: 2.45
- type: recall_at_10
value: 8.987
- type: recall_at_100
value: 22.302
- type: recall_at_1000
value: 49.903999999999996
- type: recall_at_20
value: 11.712
- type: recall_at_3
value: 4.675
- type: recall_at_5
value: 6.161
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 23.585
- type: map_at_10
value: 31.893
- type: map_at_100
value: 32.851
- type: map_at_1000
value: 32.951
- type: map_at_20
value: 32.415
- type: map_at_3
value: 29.787000000000003
- type: map_at_5
value: 31.012
- type: mrr_at_1
value: 47.171
- type: mrr_at_10
value: 54.333
- type: mrr_at_100
value: 54.949000000000005
- type: mrr_at_1000
value: 54.98800000000001
- type: mrr_at_20
value: 54.702
- type: mrr_at_3
value: 52.632999999999996
- type: mrr_at_5
value: 53.652
- type: ndcg_at_1
value: 47.171
- type: ndcg_at_10
value: 39.884
- type: ndcg_at_100
value: 44.019000000000005
- type: ndcg_at_1000
value: 46.303
- type: ndcg_at_20
value: 41.461999999999996
- type: ndcg_at_3
value: 36.153999999999996
- type: ndcg_at_5
value: 38.072
- type: precision_at_1
value: 47.171
- type: precision_at_10
value: 8.396
- type: precision_at_100
value: 1.169
- type: precision_at_1000
value: 0.147
- type: precision_at_20
value: 4.707
- type: precision_at_3
value: 22.408
- type: precision_at_5
value: 14.966
- type: recall_at_1
value: 23.585
- type: recall_at_10
value: 41.978
- type: recall_at_100
value: 58.447
- type: recall_at_1000
value: 73.7
- type: recall_at_20
value: 47.07
- type: recall_at_3
value: 33.611999999999995
- type: recall_at_5
value: 37.413999999999994
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 74.9528
- type: ap
value: 69.50790744137139
- type: f1
value: 74.77689594327182
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 8.186
- type: map_at_10
value: 13.352
- type: map_at_100
value: 14.147000000000002
- type: map_at_1000
value: 14.231
- type: map_at_20
value: 13.753000000000002
- type: map_at_3
value: 11.529
- type: map_at_5
value: 12.497
- type: mrr_at_1
value: 8.424
- type: mrr_at_10
value: 13.675999999999998
- type: mrr_at_100
value: 14.475999999999999
- type: mrr_at_1000
value: 14.557
- type: mrr_at_20
value: 14.084
- type: mrr_at_3
value: 11.843
- type: mrr_at_5
value: 12.82
- type: ndcg_at_1
value: 8.424
- type: ndcg_at_10
value: 16.534
- type: ndcg_at_100
value: 20.982
- type: ndcg_at_1000
value: 23.538999999999998
- type: ndcg_at_20
value: 18.012
- type: ndcg_at_3
value: 12.729
- type: ndcg_at_5
value: 14.466999999999999
- type: precision_at_1
value: 8.424
- type: precision_at_10
value: 2.7449999999999997
- type: precision_at_100
value: 0.507
- type: precision_at_1000
value: 0.073
- type: precision_at_20
value: 1.683
- type: precision_at_3
value: 5.478000000000001
- type: precision_at_5
value: 4.16
- type: recall_at_1
value: 8.186
- type: recall_at_10
value: 26.415
- type: recall_at_100
value: 48.282000000000004
- type: recall_at_1000
value: 68.869
- type: recall_at_20
value: 32.207
- type: recall_at_3
value: 15.909
- type: recall_at_5
value: 20.09
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 87.26858185134519
- type: f1
value: 86.73793752046078
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 54.65800273597811
- type: f1
value: 36.16413360524473
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 61.519838601210495
- type: f1
value: 58.35755839392156
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 67.04102219233357
- type: f1
value: 65.55523696441647
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 27.16765056253893
- type: v_measures
value:
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- 0.2535665532592405
- 0.25745435154373697
- 0.2588139996653209
- 0.2563977645588755
- 0.2572790917147801
- 0.28011260965698515
- 0.28489569719921415
- 0.2978121202496781
- 0.2927319740642704
- 0.27770089434179124
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 23.778196508186724
- type: v_measures
value:
- 0.22243646306633857
- 0.2203410753173429
- 0.2227543188103344
- 0.22414069966133132
- 0.2284479943649894
- 0.2523527902057292
- 0.25535019508635054
- 0.25480623149347
- 0.2575581979609686
- 0.23963168485181752
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 30.088514713666076
- type: mrr
value: 31.010218178449588
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 2.228
- type: map_at_10
value: 4.338
- type: map_at_100
value: 5.427
- type: map_at_1000
value: 6.325
- type: map_at_20
value: 4.729
- type: map_at_3
value: 3.495
- type: map_at_5
value: 3.8150000000000004
- type: mrr_at_1
value: 22.291
- type: mrr_at_10
value: 29.622
- type: mrr_at_100
value: 30.547
- type: mrr_at_1000
value: 30.618000000000002
- type: mrr_at_20
value: 30.070000000000004
- type: mrr_at_3
value: 27.141
- type: mrr_at_5
value: 28.488000000000003
- type: ndcg_at_1
value: 21.362000000000002
- type: ndcg_at_10
value: 15.64
- type: ndcg_at_100
value: 14.832
- type: ndcg_at_1000
value: 23.980999999999998
- type: ndcg_at_20
value: 14.408000000000001
- type: ndcg_at_3
value: 18.719
- type: ndcg_at_5
value: 17.137
- type: precision_at_1
value: 21.981
- type: precision_at_10
value: 11.548
- type: precision_at_100
value: 4.223
- type: precision_at_1000
value: 1.6500000000000001
- type: precision_at_20
value: 8.39
- type: precision_at_3
value: 17.337
- type: precision_at_5
value: 14.613000000000001
- type: recall_at_1
value: 2.228
- type: recall_at_10
value: 6.9190000000000005
- type: recall_at_100
value: 16.854
- type: recall_at_1000
value: 49.179
- type: recall_at_20
value: 9.166
- type: recall_at_3
value: 4.263
- type: recall_at_5
value: 4.956
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 9.176
- type: map_at_10
value: 15.720999999999998
- type: map_at_100
value: 16.847
- type: map_at_1000
value: 16.939999999999998
- type: map_at_20
value: 16.355
- type: map_at_3
value: 13.402
- type: map_at_5
value: 14.663
- type: mrr_at_1
value: 10.458
- type: mrr_at_10
value: 17.413
- type: mrr_at_100
value: 18.442
- type: mrr_at_1000
value: 18.52
- type: mrr_at_20
value: 18.006
- type: mrr_at_3
value: 15.043999999999999
- type: mrr_at_5
value: 16.367
- type: ndcg_at_1
value: 10.458
- type: ndcg_at_10
value: 19.994999999999997
- type: ndcg_at_100
value: 25.665
- type: ndcg_at_1000
value: 28.277
- type: ndcg_at_20
value: 22.233
- type: ndcg_at_3
value: 15.168999999999999
- type: ndcg_at_5
value: 17.453
- type: precision_at_1
value: 10.458
- type: precision_at_10
value: 3.711
- type: precision_at_100
value: 0.697
- type: precision_at_1000
value: 0.095
- type: precision_at_20
value: 2.3810000000000002
- type: precision_at_3
value: 7.204000000000001
- type: precision_at_5
value: 5.568
- type: recall_at_1
value: 9.176
- type: recall_at_10
value: 31.646
- type: recall_at_100
value: 57.865
- type: recall_at_1000
value: 78.11399999999999
- type: recall_at_20
value: 40.117000000000004
- type: recall_at_3
value: 18.67
- type: recall_at_5
value: 24.063000000000002
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 62.597
- type: map_at_10
value: 75.3
- type: map_at_100
value: 76.057
- type: map_at_1000
value: 76.089
- type: map_at_20
value: 75.762
- type: map_at_3
value: 72.41499999999999
- type: map_at_5
value: 74.139
- type: mrr_at_1
value: 72.11999999999999
- type: mrr_at_10
value: 79.44600000000001
- type: mrr_at_100
value: 79.691
- type: mrr_at_1000
value: 79.696
- type: mrr_at_20
value: 79.604
- type: mrr_at_3
value: 78.015
- type: mrr_at_5
value: 78.90700000000001
- type: ndcg_at_1
value: 72.15
- type: ndcg_at_10
value: 79.937
- type: ndcg_at_100
value: 82.074
- type: ndcg_at_1000
value: 82.443
- type: ndcg_at_20
value: 80.916
- type: ndcg_at_3
value: 76.452
- type: ndcg_at_5
value: 78.192
- type: precision_at_1
value: 72.15
- type: precision_at_10
value: 12.117
- type: precision_at_100
value: 1.4500000000000002
- type: precision_at_1000
value: 0.154
- type: precision_at_20
value: 6.503
- type: precision_at_3
value: 33.267
- type: precision_at_5
value: 21.944
- type: recall_at_1
value: 62.597
- type: recall_at_10
value: 88.911
- type: recall_at_100
value: 97.112
- type: recall_at_1000
value: 99.229
- type: recall_at_20
value: 92.231
- type: recall_at_3
value: 78.83099999999999
- type: recall_at_5
value: 83.757
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 31.453135224292588
- type: v_measures
value:
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- 0.34024081488556046
- 0.31978719363198366
- 0.28326863670514296
- 0.2736227852661663
- 0.33176589594215805
- 0.281739297860462
- 0.3714152055541526
- 0.2784460528138246
- 0.28292867038320446
- 0.3011498262585792
- 0.2903236549747166
- 0.36937775233378656
- 0.30011371483471927
- 0.33579158840067747
- 0.3774325279364799
- 0.2798489399988548
- 0.30350039884840657
- 0.39379070544611877
- 0.29845537391174287
- 0.280224383799162
- 0.2683644031255058
- 0.28462417081553165
- 0.4207860651822375
- 0.30599639335371903
- 0.29028935381025356
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 43.69122416835423
- type: v_measures
value:
- 0.4949442160711536
- 0.5089714608477952
- 0.533056646726052
- 0.28870974397114113
- 0.4845435888947718
- 0.4358272686082502
- 0.15963756448560423
- 0.4966594103138184
- 0.4483975331373559
- 0.5183749837794799
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 2.558
- type: map_at_10
value: 5.4670000000000005
- type: map_at_100
value: 6.601999999999999
- type: map_at_1000
value: 6.816
- type: map_at_20
value: 6.013
- type: map_at_3
value: 4.132000000000001
- type: map_at_5
value: 4.672
- type: mrr_at_1
value: 12.5
- type: mrr_at_10
value: 18.454
- type: mrr_at_100
value: 19.585
- type: mrr_at_1000
value: 19.698999999999998
- type: mrr_at_20
value: 19.093
- type: mrr_at_3
value: 16.25
- type: mrr_at_5
value: 17.349999999999998
- type: ndcg_at_1
value: 12.5
- type: ndcg_at_10
value: 9.931
- type: ndcg_at_100
value: 15.332
- type: ndcg_at_1000
value: 20.285
- type: ndcg_at_20
value: 11.73
- type: ndcg_at_3
value: 9.425
- type: ndcg_at_5
value: 7.994
- type: precision_at_1
value: 12.5
- type: precision_at_10
value: 5.11
- type: precision_at_100
value: 1.299
- type: precision_at_1000
value: 0.251
- type: precision_at_20
value: 3.5999999999999996
- type: precision_at_3
value: 8.533
- type: precision_at_5
value: 6.7
- type: recall_at_1
value: 2.558
- type: recall_at_10
value: 10.4
- type: recall_at_100
value: 26.35
- type: recall_at_1000
value: 50.888
- type: recall_at_20
value: 14.610000000000001
- type: recall_at_3
value: 5.208
- type: recall_at_5
value: 6.808
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 80.46080544471825
- type: cos_sim_spearman
value: 77.33681018334157
- type: euclidean_pearson
value: 78.32030772877526
- type: euclidean_spearman
value: 77.3367915580176
- type: manhattan_pearson
value: 78.23694581981565
- type: manhattan_spearman
value: 77.24572801084182
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 77.33143319366522
- type: cos_sim_spearman
value: 70.15243619467687
- type: euclidean_pearson
value: 74.35384725257417
- type: euclidean_spearman
value: 70.15020588975051
- type: manhattan_pearson
value: 74.49763893926959
- type: manhattan_spearman
value: 70.35289409088577
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 75.43426290814391
- type: cos_sim_spearman
value: 78.41580967540904
- type: euclidean_pearson
value: 77.87697798842441
- type: euclidean_spearman
value: 78.41580967540904
- type: manhattan_pearson
value: 77.7742301162175
- type: manhattan_spearman
value: 78.23561925777014
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 75.72059066580607
- type: cos_sim_spearman
value: 74.76063270848232
- type: euclidean_pearson
value: 75.96422568212527
- type: euclidean_spearman
value: 74.76063912580608
- type: manhattan_pearson
value: 75.93446446206052
- type: manhattan_spearman
value: 74.80351881324513
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 79.50308070637769
- type: cos_sim_spearman
value: 82.00177922226122
- type: euclidean_pearson
value: 81.88334998600465
- type: euclidean_spearman
value: 82.00175996908672
- type: manhattan_pearson
value: 82.04162815561806
- type: manhattan_spearman
value: 82.16179492395742
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 72.660749090443
- type: cos_sim_spearman
value: 78.27062791462116
- type: euclidean_pearson
value: 77.22132046879575
- type: euclidean_spearman
value: 78.27062749235377
- type: manhattan_pearson
value: 77.30349168561915
- type: manhattan_spearman
value: 78.38610133247218
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 84.40073205259823
- type: cos_sim_spearman
value: 85.85093351857286
- type: euclidean_pearson
value: 86.39555107737667
- type: euclidean_spearman
value: 85.85093351857286
- type: manhattan_pearson
value: 86.15780582794078
- type: manhattan_spearman
value: 85.67768599300385
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 54.06121880120164
- type: cos_sim_spearman
value: 61.20018366762684
- type: euclidean_pearson
value: 59.08089664894604
- type: euclidean_spearman
value: 61.20018366762684
- type: manhattan_pearson
value: 58.88169190353213
- type: manhattan_spearman
value: 60.82629422553597
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 76.9607252955321
- type: cos_sim_spearman
value: 79.20891358738938
- type: euclidean_pearson
value: 79.53044888138301
- type: euclidean_spearman
value: 79.20891358738938
- type: manhattan_pearson
value: 79.37313113618887
- type: manhattan_spearman
value: 79.0667751270519
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 71.0421477784269
- type: mrr
value: 89.94940426312975
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 31.900000000000002
- type: map_at_10
value: 38.494
- type: map_at_100
value: 39.353
- type: map_at_1000
value: 39.427
- type: map_at_20
value: 38.952
- type: map_at_3
value: 36.238
- type: map_at_5
value: 37.36
- type: mrr_at_1
value: 34.0
- type: mrr_at_10
value: 40.327
- type: mrr_at_100
value: 41.052
- type: mrr_at_1000
value: 41.120000000000005
- type: mrr_at_20
value: 40.737
- type: mrr_at_3
value: 38.333
- type: mrr_at_5
value: 39.367000000000004
- type: ndcg_at_1
value: 34.0
- type: ndcg_at_10
value: 42.419000000000004
- type: ndcg_at_100
value: 46.589000000000006
- type: ndcg_at_1000
value: 48.966
- type: ndcg_at_20
value: 43.980000000000004
- type: ndcg_at_3
value: 38.124
- type: ndcg_at_5
value: 39.952
- type: precision_at_1
value: 34.0
- type: precision_at_10
value: 5.933
- type: precision_at_100
value: 0.8330000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_20
value: 3.3329999999999997
- type: precision_at_3
value: 15.0
- type: precision_at_5
value: 10.067
- type: recall_at_1
value: 31.900000000000002
- type: recall_at_10
value: 52.800000000000004
- type: recall_at_100
value: 72.10600000000001
- type: recall_at_1000
value: 91.60000000000001
- type: recall_at_20
value: 58.699999999999996
- type: recall_at_3
value: 41.317
- type: recall_at_5
value: 45.761
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.62871287128714
- type: cos_sim_ap
value: 85.22434241429664
- type: cos_sim_f1
value: 79.31605074462217
- type: cos_sim_precision
value: 88.43788437884379
- type: cos_sim_recall
value: 71.89999999999999
- type: dot_accuracy
value: 99.62871287128714
- type: dot_ap
value: 85.22434241429666
- type: dot_f1
value: 79.31605074462217
- type: dot_precision
value: 88.43788437884379
- type: dot_recall
value: 71.89999999999999
- type: euclidean_accuracy
value: 99.62871287128714
- type: euclidean_ap
value: 85.22434237736961
- type: euclidean_f1
value: 79.31605074462217
- type: euclidean_precision
value: 88.43788437884379
- type: euclidean_recall
value: 71.89999999999999
- type: manhattan_accuracy
value: 99.62475247524752
- type: manhattan_ap
value: 85.53918872229502
- type: manhattan_f1
value: 79.38618925831203
- type: manhattan_precision
value: 81.2565445026178
- type: manhattan_recall
value: 77.60000000000001
- type: max_accuracy
value: 99.62871287128714
- type: max_ap
value: 85.53918872229502
- type: max_f1
value: 79.38618925831203
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 39.16142357597941
- type: v_measures
value:
- 0.3824405761636396
- 0.44216202123263126
- 0.3390286805950001
- 0.40370202650437953
- 0.3687764786128344
- 0.3002689364743748
- 0.3406756129607103
- 0.4239251906201308
- 0.41513537797197647
- 0.39562333880392536
- 0.44243846336620263
- 0.4564014124962121
- 0.46843968839295613
- 0.3486700249457605
- 0.3931094737880025
- 0.38614031871714743
- 0.39009948062151834
- 0.3952861715088528
- 0.3768164106667065
- 0.39372559829701875
- 0.41022022885425324
- 0.3442845107165114
- 0.36768421400456974
- 0.40522290066464794
- 0.40007875701488965
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 29.175984546605825
- type: v_measures
value:
- 0.28319515044921223
- 0.2715264094552343
- 0.27440620100214314
- 0.26830955555466396
- 0.27653185247970546
- 0.3178752664718975
- 0.3080336049306678
- 0.3068022206397505
- 0.3022010188359171
- 0.3087171748413907
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 40.56760857818254
- type: mrr
value: 40.94357439945675
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.764610926778037
- type: cos_sim_spearman
value: 30.298920879214158
- type: dot_pearson
value: 30.764611831321552
- type: dot_spearman
value: 30.298299440561465
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.109
- type: map_at_10
value: 0.781
- type: map_at_100
value: 2.995
- type: map_at_1000
value: 6.854
- type: map_at_20
value: 1.2
- type: map_at_3
value: 0.28700000000000003
- type: map_at_5
value: 0.434
- type: mrr_at_1
value: 42.0
- type: mrr_at_10
value: 54.955
- type: mrr_at_100
value: 55.655
- type: mrr_at_1000
value: 55.689
- type: mrr_at_20
value: 55.42399999999999
- type: mrr_at_3
value: 51.0
- type: mrr_at_5
value: 53.800000000000004
- type: ndcg_at_1
value: 39.0
- type: ndcg_at_10
value: 39.479
- type: ndcg_at_100
value: 25.752000000000002
- type: ndcg_at_1000
value: 22.868
- type: ndcg_at_20
value: 35.707
- type: ndcg_at_3
value: 39.419
- type: ndcg_at_5
value: 39.64
- type: precision_at_1
value: 42.0
- type: precision_at_10
value: 43.6
- type: precision_at_100
value: 25.88
- type: precision_at_1000
value: 10.784
- type: precision_at_20
value: 37.8
- type: precision_at_3
value: 43.333
- type: precision_at_5
value: 43.6
- type: recall_at_1
value: 0.109
- type: recall_at_10
value: 1.038
- type: recall_at_100
value: 5.495
- type: recall_at_1000
value: 21.665
- type: recall_at_20
value: 1.722
- type: recall_at_3
value: 0.318
- type: recall_at_5
value: 0.522
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 1.302
- type: map_at_10
value: 2.514
- type: map_at_100
value: 3.341
- type: map_at_1000
value: 3.757
- type: map_at_20
value: 2.85
- type: map_at_3
value: 1.8450000000000002
- type: map_at_5
value: 1.873
- type: mrr_at_1
value: 18.367
- type: mrr_at_10
value: 24.789
- type: mrr_at_100
value: 26.517000000000003
- type: mrr_at_1000
value: 26.593
- type: mrr_at_20
value: 25.946
- type: mrr_at_3
value: 22.448999999999998
- type: mrr_at_5
value: 22.959
- type: ndcg_at_1
value: 16.326999999999998
- type: ndcg_at_10
value: 7.7509999999999994
- type: ndcg_at_100
value: 10.67
- type: ndcg_at_1000
value: 17.76
- type: ndcg_at_20
value: 7.674
- type: ndcg_at_3
value: 10.369
- type: ndcg_at_5
value: 7.840999999999999
- type: precision_at_1
value: 18.367
- type: precision_at_10
value: 7.142999999999999
- type: precision_at_100
value: 2.327
- type: precision_at_1000
value: 0.6779999999999999
- type: precision_at_20
value: 5.408
- type: precision_at_3
value: 11.565
- type: precision_at_5
value: 7.3469999999999995
- type: recall_at_1
value: 1.302
- type: recall_at_10
value: 4.919
- type: recall_at_100
value: 14.430000000000001
- type: recall_at_1000
value: 36.949
- type: recall_at_20
value: 7.0040000000000004
- type: recall_at_3
value: 2.2319999999999998
- type: recall_at_5
value: 2.3449999999999998
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 64.47265625
- type: ap
value: 11.979631561643862
- type: f1
value: 49.90647543589666
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 61.79966044142614
- type: f1
value: 61.89030508018869
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 28.234217666259703
- type: v_measures
value:
- 0.29450695840941515
- 0.30590470809304793
- 0.29205899710992034
- 0.27123807357354457
- 0.28092608890535714
- 0.2787486406145347
- 0.26689540227394454
- 0.26139744229328293
- 0.2785944239497992
- 0.2931510314031239
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 84.0317100792752
- type: cos_sim_ap
value: 67.56361271781817
- type: cos_sim_f1
value: 63.082081211970696
- type: cos_sim_precision
value: 59.58245367112362
- type: cos_sim_recall
value: 67.01846965699208
- type: dot_accuracy
value: 84.0317100792752
- type: dot_ap
value: 67.56359342938897
- type: dot_f1
value: 63.082081211970696
- type: dot_precision
value: 59.58245367112362
- type: dot_recall
value: 67.01846965699208
- type: euclidean_accuracy
value: 84.0317100792752
- type: euclidean_ap
value: 67.5636169518733
- type: euclidean_f1
value: 63.082081211970696
- type: euclidean_precision
value: 59.58245367112362
- type: euclidean_recall
value: 67.01846965699208
- type: manhattan_accuracy
value: 84.0734338677952
- type: manhattan_ap
value: 67.44969672020721
- type: manhattan_f1
value: 63.09479205695017
- type: manhattan_precision
value: 59.90040313018734
- type: manhattan_recall
value: 66.64907651715039
- type: max_accuracy
value: 84.0734338677952
- type: max_ap
value: 67.5636169518733
- type: max_f1
value: 63.09479205695017
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 87.60624054022587
- type: cos_sim_ap
value: 82.94451598409692
- type: cos_sim_f1
value: 74.76484194294527
- type: cos_sim_precision
value: 74.86874613959235
- type: cos_sim_recall
value: 74.66122574684324
- type: dot_accuracy
value: 87.60624054022587
- type: dot_ap
value: 82.94451133280317
- type: dot_f1
value: 74.76484194294527
- type: dot_precision
value: 74.86874613959235
- type: dot_recall
value: 74.66122574684324
- type: euclidean_accuracy
value: 87.60624054022587
- type: euclidean_ap
value: 82.94449586426977
- type: euclidean_f1
value: 74.76484194294527
- type: euclidean_precision
value: 74.86874613959235
- type: euclidean_recall
value: 74.66122574684324
- type: manhattan_accuracy
value: 87.63922847052432
- type: manhattan_ap
value: 82.9449637573502
- type: manhattan_f1
value: 74.9452996046217
- type: manhattan_precision
value: 74.73015386970833
- type: manhattan_recall
value: 75.1616877117339
- type: max_accuracy
value: 87.63922847052432
- type: max_ap
value: 82.9449637573502
- type: max_f1
value: 74.9452996046217
---
# Squirtle
Squirtle is a distilled version of [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5).
## Intended purpose
<span style="color:blue">This model is designed for use in semantic-autocomplete ([click here for demo](https://mihaiii.github.io/semantic-autocomplete/)).</span>
Make sure you also pass `pipelineParams={{ pooling: "cls", normalize: true }}`, since the component's default pooling is mean.
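In Python, the equivalent of those settings is CLS pooling followed by L2 normalization. A minimal sketch is below; the repo id `Mihaiii/Squirtle` is an assumption inferred from the linked GitHub account, so adjust it if the model lives elsewhere.
```python
# Minimal sketch: CLS pooling + normalization, mirroring the
# `pooling: "cls", normalize: true` settings recommended above.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "Mihaiii/Squirtle"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

batch = tokenizer(["sort a list in python"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    cls = model(**batch).last_hidden_state[:, 0]  # CLS pooling: first token's hidden state
embeddings = torch.nn.functional.normalize(cls, p=2, dim=1)  # normalize: true
print(embeddings.shape)
```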
## Usage
Other than within [semantic-autocomplete](https://github.com/Mihaiii/semantic-autocomplete), you can use this model the same way as [bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5#usage). | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
nasa-impact/nasa-smd-ibm-distil-v0.1 | nasa-impact | fill-mask | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"earth science",
"climate",
"biology",
"en",
"arxiv:2405.10725",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-21T18:41:13 | 2024-10-11T02:14:02 | 35 | 8 | ---
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: fill-mask
tags:
- earth science
- climate
- biology
---
# Model Card for INDUS-Small (nasa-smd-ibm-distil-v0.1)
INDUS-Small (nasa-smd-ibm-distil-v0.1) is a distilled version of the RoBERTa-based, encoder-only transformer model INDUS (nasa-impact/nasa-smd-ibm-v0.1), domain-adapted for NASA Science Mission Directorate (SMD) applications. It is fine-tuned on scientific journals and articles relevant to NASA SMD, aiming to enhance natural language technologies like information retrieval and intelligent search.
We trained the smaller model, INDUS_SMALL, with 38M parameters via knowledge distillation, using INDUS as the teacher. INDUS_SMALL follows a 4-layer architecture recommended by the Neural Architecture Search engine (Trivedi et al., 2023), which offers an optimal trade-off between performance and latency. We adopted the distillation objective proposed in MiniLMv2 (Wang et al., 2021) to transfer fine-grained self-attention relations, which has been shown to be the current state of the art (Udagawa et al., 2023). Using this objective, we trained the model for 500K steps with an effective batch size of 480 on 30 V100 GPUs.
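As a quick sanity check of the pretrained checkpoint, it can be queried through the standard fill-mask pipeline. The sketch below assumes the tokenizer exposes a RoBERTa-style `<mask>` token; since the tokenizer is custom, verify the mask token via `tokenizer.mask_token` first.
```python
# Minimal sketch: masked-token prediction with the distilled encoder.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="nasa-impact/nasa-smd-ibm-distil-v0.1")
for pred in fill_mask("The rover collected <mask> samples from the Martian surface."):
    print(f"{pred['token_str']!r}  score={pred['score']:.3f}")
```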
## Model Details
- **Base Model**: INDUS
- **Tokenizer**: Custom
- **Original version Parameters**: 125M
- **Pretraining Strategy**: Masked Language Modeling (MLM)
- **Distilled Version Parameters**: 38M
## Training Data
- Wikipedia English (Feb 1, 2020)
- AGU Publications
- AMS Publications
- Scientific papers from Astrophysics Data Systems (ADS)
- PubMed abstracts
- PubMedCentral (PMC) (commercial license subset)

## Training Procedure
- **Framework**: fairseq 0.12.1 with PyTorch 1.9.1
- **transformers Version**: 4.2.0
- **Strategy**: Masked Language Modeling (MLM)
## Evaluation
### BLURB benchmark

(Standard deviation across 10 random seeds in parentheses. Macro avg. reported across datasets; micro avg. computed by first averaging scores within each task and then averaging across task averages.)
### Climate Change NER, and NASA-QA benchmark

(Climate Change NER and NASA-QA benchmark results. Standard deviation over multiple runs given in parentheses.)
Please refer to the following dataset cards for further benchmarks and evaluation:
- NASA-IR Benchmark - https://huggingface.co/datasets/nasa-impact/nasa-smd-IR-benchmark
- NASA SMD Expert QA Benchmark - https://huggingface.co/datasets/nasa-impact/nasa-smd-qa-benchmark
- Climate Change NER Benchmark - https://huggingface.co/datasets/ibm/Climate-Change-NER
## Uses
- Named Entity Recognition (NER)
- Information Retrieval
- Sentence Transformers
- Extractive QA
For NASA SMD-related scientific use cases.
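Because the released checkpoint is a masked-language-model encoder, the uses above require attaching a task head and fine-tuning. A minimal sketch of loading it as a token-classification (NER) backbone is below; the label count is a placeholder, not a value from this card.
```python
# Minimal sketch: wrap the encoder with a token-classification head for NER.
# The head is randomly initialized and must be fine-tuned on labeled data.
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "nasa-impact/nasa-smd-ibm-distil-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=9)  # placeholder label count
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.0f}M parameters")
```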
## Note
This model is released in support of the training and evaluation of the encoder language model ["Indus"](https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1).
Accompanying paper can be found here: https://arxiv.org/abs/2405.10725
## Citation
If you find this work useful, please cite using the following bibtex citation:
```bibtex
@misc{nasa-impact_2023,
author = {Masayasu Muraoka and Bishwaranjan Bhattacharjee and Muthukumaran Ramasubramanian and Iksha Gurung and Rahul Ramachandran and Manil Maskey and Kaylin Bugbee and Rong Zhang and Yousef El Kurdi and Bharath Dandala and Mike Little and Elizabeth Fancher and Lauren Sanders and Sylvain Costes and Sergi Blanco-Cuaresma and Kelly Lockhart and Thomas Allen and Felix Grazes and Megan Ansdell and Alberto Accomazzi and Sanaz Vahidinia and Ryan McGranaghan and Armin Mehrabian and Tsendgar Lee},
title = { nasa-smd-ibm-v0.1 (Revision f01d42f) },
year = 2023,
url = { https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1 },
doi = { 10.57967/hf/1429 },
publisher = { Hugging Face }
}
```
## Attribution
IBM Research
- Masayasu Muraoka
- Bishwaranjan Bhattacharjee
- Rong Zhang
- Yousef El Kurdi
- Bharath Dandala
NASA SMD
- Muthukumaran Ramasubramanian
- Iksha Gurung
- Rahul Ramachandran
- Manil Maskey
- Kaylin Bugbee
- Mike Little
- Elizabeth Fancher
- Lauren Sanders
- Sylvain Costes
- Sergi Blanco-Cuaresma
- Kelly Lockhart
- Thomas Allen
- Felix Grazes
- Megan Ansdell
- Alberto Accomazzi
- Sanaz Vahidinia
- Ryan McGranaghan
- Armin Mehrabian
- Tsendgar Lee
## Disclaimer
This Encoder-only model is currently in an experimental phase. We are working to improve the model's capabilities and performance, and as we progress, we invite the community to engage with this model, provide feedback, and contribute to its evolution.
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"BLURB"
] |
EIRTHAIMED/Llama-3.1-EIRAI-8B-Prob | EIRTHAIMED | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"text-generation-inference",
"llama-3.1",
"finetuning",
"conversational",
"th",
"en",
"arxiv:2409.08523",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-09-09T04:12:49 | 2024-09-16T06:58:52 | 35 | 1 | ---
base_model: meta-llama/Meta-Llama-3.1-8B
language:
- th
- en
library_name: transformers
license: llama3.1
tags:
- medical
- text-generation-inference
- llama-3.1
- finetuning
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66bf1cd096583c59b024a3c5/oG16EyLMfyiqvXrbNPGZd.png" alt="Logo_Website" width="400"/>
</p>
# **Thai Medical Large Language Model**
**Github** : [Github Evaluate](https://github.com/EIRAI-Thaimedical/EIRAI)<br>
**PaPer** : <br>
## **Llama-3.1-EIRAI-8B-instruct**
**Llama-3.1-EIRAI-8B-instruct** is an **8-billion-parameter model** developed specifically for **Thai medical applications**, with expertise in both **Thai medical language** and **English medical terminology**. The model has demonstrated its capabilities on key benchmarks such as **MMLU**, **MedQA**, **PubMedQA**, and **MedMCQA**, as well as Thai-language assessments like **ThaiExam**, **M3Exam**, **XNLI**, and **XCOPA**. Additionally, we have created a **Clinically Adapted Model Enhanced test** in the **Thai language** to support **clinical use in hospitals** and to further improve the performance of **Thai medical Retrieval-Augmented Generation (RAG)**.
## Notice
While **Eir AI Thai Medical LLM** is designed to encode high-quality medical knowledge, it is **not yet optimized for safe, practical use** in real-world medical settings. The model is still in the research phase and should **not be used for clinical decision-making** without further validation, including randomized controlled trials. It is available for researchers to explore the potential of LLMs in medical contexts, but **real-world deployment is not recommended** in its current version.
## Safety and Future Work
The current version of **Eir AI Thai Medical LLM** is under active development. We advise against using it for medical applications until further testing is completed. Our goal is to continue enhancing the model through **rigorous testing** and **real-world evaluation**, ensuring that it can be safely integrated into healthcare systems in the future.
## Model Overview
- **Model Architecture:** Meta-Llama-3.1-8B-Instruct
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
### Evaluations
| Medical Model | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | PubMedQA | MedMCQA | Avg. |
|--------------------------|---------------------|---------------------|--------------------|--------------------|--------------------|--------------------|-------------------|-------------------|-------------------|-------------------|
| **GPT-3.5 Turbo 1106** | 74.7 | 60.2 | 65.9 | 72.0 | 64.73 | 64.73 | 57.71 | 72.66 | 66.0 | 66.6 |
|Thai LLMs | | | | | | | | | | |
| **Eir AI-8B** | 75.1 | 80.0 | 69.6 | 76.8 | 77.1 | 66.5 | 64.5 | **79.0** | 58.6 | 71.9 |
| **Eir AI-8B + Prob** | **83.8** | **89.0** | **83.0** | **84.9** | **89.6** | **75.7** | **69.6** | 78.8 | **67.1** | **80.2** |
| **Typhoon-v1.5x-8B** | 75.9 | 79.0 | 63.7 | 70.6 | 77.1 | 63.6 | 59.7 | 74.4 | 58.0 | 69.1 |
| **OpenThaiGPT-beta-7B** | 37.4 | 38.0 | 4.5 | 32.7 | 36.1 | 32.4 | 32.4 | 62.0 | 31.8 | 34.1 |
## Translation Performance Metrics
| **Model** | **BLEU Score** | **N-gram Precisions (%)** | **BP (Brevity Penalty)** | **Length Ratio** |
|-------------------------------|----------------|---------------------------------|---------|-----------|
| Typhoon-v1.5x-8B-Instruct | 34.42 | 71.3/50.6/38.6/29.6 | 0.764 | 0.788 |
| Meta Llama 3.1-8B Instruct | 35.74 | 62.8/42.3/31.7/24.1 | 0.946 | 0.948 |
| **Eir AI-8B** | **61.10** | **76.1/64.6/56.6/50.1** | **1.000**| **1.006** |
| Eir AI-8B-prob | 47.91 | 74.0/58.0/48.2/40.6 | 0.890 | 0.896 |
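As a rough illustration of how the BLEU, n-gram precision, brevity penalty, and length ratio columns above are computed, here is a generic sacreBLEU sketch. It is not the exact evaluation script used for this table; the toy hypothesis and reference strings are illustrative.

```python
# Generic corpus-level BLEU with sacreBLEU (pip install sacrebleu).
# The hypothesis/reference pair below is toy data, not the actual test set.
import sacrebleu

hypotheses = ["the patient has a fever and a productive cough"]
references = [["the patient presents with fever and a productive cough"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)                   # corpus BLEU score
print(bleu.precisions)              # 1- to 4-gram precisions (%)
print(bleu.bp)                      # brevity penalty
print(bleu.sys_len / bleu.ref_len)  # length ratio
```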
## Clinically Adapted Thai Medical Task Performance
| Task | GPT-3.5 | Typhoon-v1.5x-8B-instruct | GPT-4o | Eir AI-8B |
|----------------------------------------|---------|----------------------------|--------|-----------|
| Named Entity Recognition | 3.26 | 5.55 | 6.34 | **7.08** |
| Temporal Information Extraction | 3.83 | 5.46 | 6.15 | **7.05** |
| Paraphrasing | 2.36 | 4.68 | 6.35 | **7.06** |
| Natural Language Generation | 2.63 | 4.87 | 6.91 | **7.66** |
| Keyword Extraction | 2.60 | 5.15 | 7.01 | **7.35** |
| Text Classification | 2.92 | 6.21 | 5.36 | **6.75** |
| Relation Extraction | 3.29 | 5.94 | 4.37 | **6.92** |
| Question Answering | 3.70 | 4.92 | 6.11 | **6.82** |
| Text Summarization | 2.98 | 5.44 | **7.51**| **7.51** |
| Abbreviation Expansion | 3.99 | 5.96 | 6.24 | **7.82** |
| Clinical Concept Normalization | 2.67 | 5.63 | 5.82 | **6.55** |
| Open-ended Question | 3.32 | 5.55 | 6.77 | **7.27** |
| Multiple-Choice Question | 3.90 | 5.00 | 5.40 | **6.40** |
| Coreference Resolution | 3.48 | 4.55 | 4.88 | **6.43** |
| Yes/No Question | 2.71 | 5.86 | 4.86 | **7.38** |
| Medical Translation | 3.00 | 4.00 | **7.79**| 7.65 |
| Medical Thai Extraction | 2.81 | 7.16 | **8.62**| 8.16 |
| Medical ICD Prediction | 2.08 | 3.16 | **8.12**| 6.41 |
| **Average Score** | 3.05 | 5.33 | 6.38 | **7.11** |
# Prompt Template
This model uses the Llama 3.1 chat template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
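If you prefer to build the prompt string by hand instead of relying on a chat template helper, here is a minimal sketch (the `build_prompt` helper and example strings are illustrative):

```python
# Minimal sketch: filling the template above by hand.
def build_prompt(system_prompt: str, prompt: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )

text = build_prompt(
    "You are an expert medical assistant named EIR.",
    "การใช้ clinical tracer มีบทบาทอย่างไรในการพัฒนาคุณภาพการดูแลผู้ป่วย?",
)
print(text)
```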
# Example Clinical Adapted ICD 10 Prediction
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are responsible for accurately assigning ICD-10 codes and to diagnose and document medical records.
Your expertise ensures that healthcare providers are properly reimbursed and that patient care is well-documented.
In this scenario, you will be presented with a series of medical records and your task is to provide the correct ICD-10 code(s)
and ICD-9 CM in procedures based on the information provided.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
"Chief Complaint :5วันก่อนมารพ.มีไข้ ไอ มีเสมหะ มีน้ำมูก เหนื่อย ปวดเมื่อยตามตัว \r\n
Present illness : 5วันก่อนมารพ.มีไข้ ไอ มีเสมหะ มีน้ำมูก เหนื่อย ปวดเมื่อยตามตัว มีน้ำมูก เลือดกำเดาจาากข้างขวา
ปฏิการกระทบกระแทก ไม่มีเจ็บคอ ไม่มีอาการอ่อนเพลีย มีอาการอ่อนเพลีย ไอมาก ไอตลอด มีอาการระคายคอ ปัสสาวะปกติ ไม่มีถ่ายเหลว
\r\n\r\nAllergy : |\r\n\r\nOther : no underlying disease\r\n\r\nPlan Treatment Day 1 of hospitalization : admit ward
\r\n\r\nReview of System { \r\n\r\n General :a thai adult female ,look sickness fatigue dry lip moderate dehydration
\r\n Skin :no MP rash \r\n Eyes :not pale ,no icteric sclera \r\n Chest :secretion sound in both lung ,no crepitation , no wheezing \r
\n }
VitalSign First : {\n
BP : 117.0/63.0 mmHg\n
Pulse : 62.0 BPm\n
Temperature : 37.0 Celsius\n
Respiratory rate : 20.0\n
Weight : 50.000 kgs.\n
Height : 165.0 cm.\n
Painscore: N/A\n
O2SAT : 100\n}\n
Lab Results: \n
Electrolyte:Sodium (Na), Result : 143 mmol/L\r\n
Electrolyte:Potassium (K),Result : 3.8 mmol/L\r\n
Electrolyte:Chloride (Cl), Result : 108 mmol/L\r\n
Electrolyte:Bicarbonate (CO2),Result : 27.0 mmol/L\r\n
Creatinine (Serum):Creatinine, Result : 0.69 mg/dL\r\n
Creatinine (Serum):eGFR,Result : 100.41 ml/min/1.73 m^2\r\n
AST/SGOT:AST/SGOT, Result : 48 U/L\r\n
ALT/SGPT:ALT/SGPT, Result : 42 U/L\r\n
CBC:WBC Count,Result : 3.2 10^3/uL\r\n
CBC:RBC Count, Result : 3.57 10^6/uL\r\n
CBC:Hemoglobin (Hb), Result : 10.7 g/dL\r\n
CBC:Hematocrit (HCT),Result : 32.4 %\r\n
CBC:MCV, Result : 91 fL\r\n
CBC:MCH, Result : 30.0 pg\r\n
CBC:MCHC, Result : 33.0 g/dL\r\n
CBC:RDW-CV,Result : 12.9 %\r\n
CBC:Platelet Count, Result : 226 10^3/uL\r\n
CBC:Platelet Estimates, Result : Adequate \r\n
CBC:Neutrophils,Result : 41 %\r\n
CBC:Lymphocytes, Result : 46 %\r\n
CBC:Monocytes, Result : 9 %\r\n
CBC:Eosinophils, Result : 4 %\r\n
CBC:Basophils,Result : 0 %\r\n
COVID-19 Antigen (Rapid Test) For Admit:COVID-19 Antigen (Rapid Test), Result : Positive \r\n
Influenza A/B screening:Flu A,Result : Negative \r\n
Influenza A/B screening:Flu B, Result : Negative \n
X-ray Results: 15-05-2567 12:56 CHEST (PA, UPRIGHT)\r\n
\r\nHISTORY: Fever with productive cough and dyspnea for 5 days. \r\nCOMPARISON: None.\r\n
\r\n
FINDINGS:\r\n
TUBES/LINES: None.\r\nLUNGS: Normal lung volumes. Band-like opacity at right apical lung. Clear left lung.\r\n
PLEURA: No pneumothorax or effusion. Mild bilateral apical pleural thickening. \r\n
HEART AND MEDIASTINUM: Normal cardiac silhouette. Normal mediastinal contour.\r\n
BONY STRUCTURES: Mild scoliosis of the thoracic spine. \r\n\r\n
IMPRESSION:\r\n
- Band-like opacity at right apical lung, probably plate atelectasis or fibrosis\r\n- Mild bilateral apical pleural thickening"
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
Let's analyze the provided information and document the appropriate ICD-10 codes based on the patient's symptoms and diagnosis.
### Step-by-Step Analysis:
1. **Understand the Symptoms and Diagnosis**:
- The patient presents with a range of symptoms including fever, cough with sputum, fatigue, nasal discharge (including blood),
and dyspnea. The lab tests returned a positive COVID-19 antigen test, while the influenza test was negative.
2. **Identify Key Terms**:
- Primary key terms include 'COVID-19 positive', 'fever', 'productive cough', 'dyspnea', and pulmonary findings from the X-ray.
3. **Locate the Main Category**:
- The main categories for the conditions noted are:
- COVID-19: U07.1
- Possible acute bronchitis: J20 (based on the productive cough)
- Other symptoms such as dizziness: R42
- Evidence of acute respiratory infection: J06.9
4. **Identify Specific Codes**:
- Based on the diagnosis:
- COVID-19 confirmed case: U07.1
- Acute bronchitis: J20.9 (considering unspecified acute bronchitis)
- Dizziness (if noted as significant and problematic): R42
- Unspecified acute upper respiratory infection: J06.9
5. **Check for Additional Codes or Modifiers**:
- Considering the details provided, the primary diagnosis is the confirmed COVID-19 infection, while bronchiolitis
and dizziness might be secondary diagnoses.
6. **Validate the Code Selection**:
- Confirm that the selected codes reflect the patient's symptoms accurately, including acute bronchitis
and any other significant symptoms presented.
7. **Document the ICD-10 Code**:
- The ICD-10 codes are documented for all noted conditions accordingly."
```json
Answer: {
"Primary_Diagnosis": {
"Code": "U071",
"Description": "COVID-19, virus identified"
},
"Secondary_Diagnosis": [
{
"Code": "J20",
"Description": "Acute bronchitis due to unspecified organism"
},
{
"Code": "R42",
"Description": "Dizziness and giddiness"
},
{
"Code": "J069",
"Description": "Acute upper respiratory infection, unspecified"
}
],
"ICD9_Procedure": []
}
````
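If you want to consume the ICD answer programmatically, here is a small sketch that pulls the JSON block out of the generated text. The regex and key names assume the output format shown above and should be treated as illustrative:

````python
# Sketch: extract the ```json ... ``` block from a model response and parse it.
import json
import re

# Toy output mimicking the assistant response format shown above.
generated_text = (
    "Documented codes follow.\n"
    "```json\n"
    'Answer: {"Primary_Diagnosis": {"Code": "U071"}}\n'
    "```"
)

def extract_icd_answer(text: str) -> dict:
    match = re.search(r"```json\s*(.*?)```", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON block found in model output")
    payload = match.group(1).replace("Answer:", "", 1).strip()
    return json.loads(payload)

print(extract_icd_answer(generated_text)["Primary_Diagnosis"]["Code"])  # U071
````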
# Example Clinical Adapted Thai Medical Extraction
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Task : Extract input the following patient information into output format Tone: the following medical text into
Thai in a fluent and elegant style.
Output Format.1.Age: \n2.Gender: \n3.Weight :\n4.Height : \n5.Chief Complaint: \n6.Symptoms and Signs: \n7.Medical History: \n
8.Current Medications: \n9.Laboratory Results: \n10.Imaging Findings: \n11.Allergy: \n12.Drug Allergy:
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
ผู้ป่วยของเราเป็นชายถนัดทั้งสองมือ อายุ 43 ปี มีประวัติการชักที่ไม่สามารถควบคุมได้มาเป็นเวลา 20 ปี ลักษณะการชักของเขามักจะรวมถึงการรู้สึกร้อนวูบวาบและอาการทางประสาทสัมผัสอื่น ๆ
ที่พัฒนาไปสู่การเคลื่อนไหวของกล้ามเนื้อที่มีจุดศูนย์กลางส่วนใหญ่ทางด้านขวา การตรวจหาสาเหตุของการชักรวมถึงการถ่ายภาพด้วยคลื่นแม่เหล็กไฟฟ้า (MRI) ซึ่งเผยให้เห็นเนื้องอกไขมันขนาดใหญ่ที่เส้นกลางสมอง
การพัฒนาไม่สมบูรณ์ของคอร์ปัสคาโลซัมบางส่วน และรอยโรคที่อยู่ใกล้เคียงในสมองส่วนหน้าซ้ายที่คาดว่าจะเป็นเนื้องอกกลีอาล (glial neoplasm) ตามลักษณะภาพถ่ายทางรังสี
รอยโรคในสมองส่วนหน้าซ้ายด้านหน้าและตรงกลางประกอบด้วยการกลายเป็นหินปูนแบบเป็นก้อนพร้อมการเพิ่มขึ้นของสัญญาณ FLAIR ที่กว้างขวางซึ่งเกี่ยวข้องกับไจรัสซิงกูเลตทั้งสองข้างและสมองส่วนหน้าซ้าย
(รูปที่ ).\n\nการจัดการทางการแพทย์ล้มเหลวในการควบคุมการชักของเขาและเขาถูกส่งต่อเพื่อหาทางเลือกในการรักษาด้วยการผ่าตัด รอยโรคที่เพิ่มขึ้นถูกสังเกตด้วยการถ่ายภาพเพิ่มเติมและขอบเขตของอาการบวมน้ำก็เพิ่มขึ้นด้วย
ความกังวลเกี่ยวกับการพัฒนาเนื้องอกกลีอาลที่เพิ่มขึ้นและการควบคุมการชักที่ไม่ดีทำให้มีการแนะนำให้ทำการผ่าตัด
การตัดสินใจถูกทำขึ้นเพื่อดำเนินการผ่าตัดนำทางด้วยระบบประสาทเพื่อตัดมวลที่เพิ่มขึ้นในสมองส่วนหน้าซ้ายและการตัดสมองส่วนหน้าบางส่วนโดยใช้การตรวจคลื่นไฟฟ้าสมองระหว่างการผ่าตัด
(intraoperative electroencephalogram - EEG), การทำแผนที่คอร์ติคอล (cortical mapping) และการตรวจวัดศักย์ไฟฟ้าที่เกิดจากการกระตุ้นประสาทรับความรู้สึก
(somatosensory evoked potentials - SSEP)\n\nตัวอย่างที่ส่งไปตรวจทางพยาธิวิทยาแบบแช่แข็งในระหว่างการผ่าตัดพบว่ามีเส้นใยโรเซนธาล (Rosenthal fibers)
และการกลายเป็นหินปูนแบบเป็นจุดซึ่งคาดว่าจะเป็นเนื้องอกกลีอาล การประเมินทางพยาธิวิทยาแบบถาวรเผยให้เห็นเนื้องอกไขมัน (lipoma) และความผิดปกติของคอร์ติคอลแบบเฉพาะจุด
(focal cortical dysplasia) แบบ Palmini Type IA ในสมองที่อยู่ใกล้เคียง ความผิดปกติเล็กน้อยของโครงสร้างคอร์ติคอลและการเกิดกลีโอซิส (gliosis)
ในเนื้อสมองขาวที่เกี่ยวข้องสามารถเห็นได้ในคราบสีฮีมาโทซิลินและอีโอซิน (hematoxylin and eosin - H&E) และคราบสีโปรตีนกรดกลีอาลไฟบริลลารี (glial fibrillary acidic protein - GFAP)
และการย้อมสีโปรตีนเส้นประสาท (neurofilament protein - NFP) ในบริเวณที่เกิดกลีโอซิสไม่แสดงหลักฐานของเซลล์ประสาทที่ผิดรูป เซลล์ประสาทขนาดใหญ่ หรือเซลล์ลูกโป่ง (รูปที่ ).\n\n
การถ่ายภาพหลังการผ่าตัด (รูปที่ ) แสดงให้เห็นการตัดรอยโรคที่เพิ่มขึ้นใกล้เคียงทั้งหมดในสมองส่วนหน้าซ้ายและไจรัสซิงกูเลต การลดลงอย่างมากของอาการบวมน้ำที่เกิดจากหลอดเลือดถูกสังเกตในภาพ
FLAIR เนื้องอกไขมันที่เส้นกลางสมองไม่เปลี่ยนแปลง\n\nการฟื้นตัวหลังการตัดรวมถึงอาการเสริมในบริเวณมอเตอร์ซินโดรมระดับปานกลางรวมถึงอาการอ่อนแรงครึ่งซีก การพูดล่าช้า และการเปลี่ยนแปลงทางอารมณ์
การเปลี่ยนแปลงเหล่านี้ค่อยๆ หายไปและพบว่าผู้ป่วยมีความสมบูรณ์ทางระบบประสาทในการติดตามผลที่คลินิก การควบคุมการชักเกิดขึ้นทันทีหลังการตัด ยาต้านการชักถูกใช้อย่างต่อเนื่องและค่อยๆ
ถูกถอดออกในเดือนต่อมาโดยไม่มีการกลับมาเป็นซ้ำ ผู้ป่วยไม่มีการชักเป็นเวลา 3 ปีหลังการตัด การถ่ายภาพติดตามผล 18 เดือนพบการเปลี่ยนแปลง T2/FLAIR ที่คงที่พร้อมกับบริเวณเล็ก ๆ
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
1. อายุ: 43 ปี\n
2. เพศ: ชาย\n
3. น้ำหนัก: ไม่ระบุ\n
4. ส่วนสูง: ไม่ระบุ\n
5. อาการหลัก: การชักที่ไม่สามารถควบคุมได้\n
6. อาการและสัญญาณ: รู้สึกร้อนวูบวาบ, อาการทางประสาทสัมผัส, การเคลื่อนไหวของกล้ามเนื้อที่มีจุดศูนย์กลางส่วนใหญ่ทางด้านขวา\n
7. ประวัติการรักษา: การจัดการทางการแพทย์ล้มเหลวในการควบคุมการชัก\n
8. ยาที่ใช้ในปัจจุบัน: ยาต้านการชัก\n9. ผลการตรวจทางห้องปฏิบัติการ: ไม่ระบุ\n
10. ผลการตรวจภาพ: MRI แสดงเนื้องอกไขมันขนาดใหญ่ที่เส้นกลางสมอง, การพัฒนาไม่สมบูรณ์ของคอร์ปัสคาโลซัมบางส่วน, รอยโรคที่อยู่ใกล้เคียงในสมองส่วนหน้าซ้ายที่คาดว่าจะเป็นเนื้องอกกลีอาล\n
11. อาการแพ้: ไม่ระบุ\n
12. อาการแพ้ยา: ไม่ระบุ\n
การผ่าตัดนำทางด้วยระบบประสาทเพื่อตัดมวลที่เพิ่มขึ้นในสมองส่วนหน้าซ้ายและการตัดสมองส่วนหน้าบางส่วนได้ถูกดำเนินการ
และผลการตรวจทางพยาธิวิทยาเผยให้เห็นว่าเป็นเนื้องอกไขมันและความผิดปกติของคอร์ติคอลแบบเฉพาะจุด (focal cortical dysplasia) แบบ Palmini Type IA
ในสมองที่อยู่ใกล้เคียง การฟื้นตัวหลังการผ่าตัดรวมถึงอาการเสริมในบริเวณมอเตอร์ซินโดรมระดับปานกลาง แต่การควบคุมการชักเกิดขึ้นทันทีหลังการผ่าตัดและผู้ป่วยไม่มีการชักเป็นเวลา
3 ปีหลังการผ่าตัด การถ่ายภาพติดตามผล 18 เดือนพบการเปลี่ยนแปลง T2/FLAIR ที่คงที่พร้อมกับบริเวณเล็ก ๆ ที่เพิ่มขึ้นของการเพิ่มขึ้นใกล้เคียงที่ไม่เปลี่ยนแปลง.
````
# How to use
```python
# Using the transformers.pipeline() API
import transformers
import torch
model_id = "EIRTHAIMED/Llama-3.1-EIRAI-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert medical assistant named EIR , developed by EIR Thai Medical LLM. You are to be a helpful, respectful, and honest assistant."},
{"role": "user", "content": "การใช้ clinical tracer มีบทบาทอย่างไรในการพัฒนาคุณภาพการดูแลผู้ป่วย?"}
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
```
# Citation
```bibtex
@article{EirAI,
title={Eir: Thai Medical Large Language Models},
author={Yutthakorn Thiprak and Rungtam Ngodngamthaweesuk and Songtam Ngodngamtaweesuk, MD},
year={2024},
journal={arXiv preprint arXiv:2409.08523},
url={https://arxiv.org/abs/2409.08523}
}
```
---
**Thank you very much**
Asst. Prof. Dr. Ekapol Chuangsuwanich and Praj Bhargava (Research Engineer at Meta) for their valuable endorsement of our preprint paper on arXiv.
**Thank you**
Draft Reviewer Report
[Kullawat Chaowanawatee](https://www.computing.psu.ac.th/profile/index.php?staffid=coc0051) and [Dr. Jakapan Suaboot](https://www.computing.psu.ac.th/profile/index.php?staffid=coc0056) from Prince of Songkla University, Phuket Campus
<br>
Draft Industry Reviewer Report
[Mr. Piyawat Maneenual](https://ieeexplore.ieee.org/author/37086452350), Assistant IT Manager, Thonburi Rajyindee Hospital<br>
| [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"TEXT_CLASSIFICATION",
"COREFERENCE_RESOLUTION",
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION",
"PARAPHRASING"
] | [
"MEDQA",
"PUBMEDQA"
] |
Mardiyyah/Llama3-OpenBioLLM-8B-GGUF | Mardiyyah | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"autoquant",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:quantized:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"endpoints_compatible",
"region:us"
] | 2024-08-22T09:13:43 | 2024-08-22T10:07:32 | 34 | 1 | ---
base_model: meta-llama/Meta-Llama-3-8B
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
- autoquant
- gguf
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-8B
results: []
---
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 (greedy decoding) to make this happen less often.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # note: pipeline's device= parameter does not accept "auto"
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # greedy decoding; temperature=0.0 is invalid when do_sample=True
)
print(outputs[0]["generated_text"][len(prompt):])
```
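Since this repository hosts GGUF quantizations, you may prefer to run the model with `llama-cpp-python` instead of `transformers`. A minimal sketch, assuming the package is installed and using an illustrative file name (check the repository for the actual GGUF file):

```python
# Sketch: chat completion via llama-cpp-python (pip install llama-cpp-python).
# The model_path below is illustrative; point it at the GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Llama3-OpenBioLLM-8B.Q4_K_M.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are an expert from the healthcare and biomedical domain."},
        {"role": "user", "content": "How long does it take for newborn jaundice to go away?"},
    ],
    max_tokens=256,
    temperature=0.0,  # greedy decoding, per the advisory above
)
print(out["choices"][0]["message"]["content"])
```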
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
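For readers reproducing this setup outside Axolotl, the PEFT hyperparameters above map onto a `peft` `LoraConfig` roughly as follows. This is a sketch, not the exact training code:

```python
# Sketch: the card's PEFT hyperparameters expressed as a peft LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "v_proj", "k_proj", "o_proj",
        "gate_proj", "down_proj", "up_proj",
    ],
    task_type="CAUSAL_LM",
)
```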
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- LM Evaluation Harness (for evaluation)
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5, Meditron-70B across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | [
"QUESTION_ANSWERING"
] | [
"MEDQA",
"PUBMEDQA"
] |
NoYo25/BiodivBERT | NoYo25 | token-classification | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"bert-base-cased",
"biodiversity",
"token-classification",
"sequence-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-05-16T13:02:40 | 2023-07-13T08:51:53 | 33 | 3 | ---
language:
- en
license: apache-2.0
metrics:
- f1
- precision
- recall
- accuracy
tags:
- bert-base-cased
- biodiversity
- token-classification
- sequence-classification
thumbnail: https://www.fusion.uni-jena.de/fusionmedia/fusionpictures/fusion-service/fusion-transp.png?height=383&width=680
citation: 'Abdelmageed, N., Löffler, F., & König-Ries, B. (2023). BiodivBERT: a Pre-Trained
Language Model for the Biodiversity Domain.'
paper: https://ceur-ws.org/Vol-3415/paper-7.pdf
evaluation datasets:
- url: https://doi.org/10.5281/zenodo.6554208
- named entity recognition:
- COPIOUS
- QEMP
- BiodivNER
- LINNAEUS
- Species800
- relation extraction:
- GAD
- EU-ADR
- BiodivRE
- BioRelEx
training_data:
- crawling-keywords:
- biodivers
- genetic diversity
- omic diversity
- phylogenetic diversity
- soil diversity
- population diversity
- species diversity
- ecosystem diversity
- functional diversity
- microbial diversity
- corpora:
- (+Abs) Springer and Elsevier abstracts in the duration of 1990-2020
- (+Abs+Full) Springer and Elsevier abstracts and open access full publication text
in the duration of 1990-2020
pre-training-hyperparams:
- MAX_LEN = 512
- MLM_PROP = 0.15
- num_train_epochs = 3
- per_device_train_batch_size = 16
- per_device_eval_batch_size = 16
- gradient_accumulation_steps = 4
---
# BiodivBERT
## Model description
* BiodivBERT is a domain-specific BERT based cased model for the biodiversity literature.
* It uses the tokenizer from the BERT base cased model.
* BiodivBERT is pre-trained on abstracts and full text from biodiversity literature.
* BiodivBERT is fine-tuned on two downstream tasks, Named Entity Recognition and Relation Extraction, in the biodiversity domain.
* Please visit our [GitHub Repo](https://github.com/fusion-jena/BiodivBERT) for more details.
## How to use
* You can use BiodivBERT via the Hugging Face `transformers` library as follows; a fuller inference sketch follows the list:
1. Masked Language Model
````
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("NoYo25/BiodivBERT")
>>> model = AutoModelForMaskedLM.from_pretrained("NoYo25/BiodivBERT")
````
2. Token Classification - Named Entity Recognition
````
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("NoYo25/BiodivBERT")
>>> model = AutoModelForTokenClassification.from_pretrained("NoYo25/BiodivBERT")
````
3. Sequence Classification - Relation Extraction
````
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("NoYo25/BiodivBERT")
>>> model = AutoModelForSequenceClassification.from_pretrained("NoYo25/BiodivBERT")
````
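* As noted above, here is a fuller end-to-end sketch using the `fill-mask` pipeline (the example sentence is illustrative; token-classification and sequence-classification inference additionally require a fine-tuned head):
````
>>> from transformers import pipeline
>>> fill = pipeline("fill-mask", model="NoYo25/BiodivBERT")
>>> for pred in fill("Deforestation is a major driver of [MASK] loss."):
...     print(pred["token_str"], round(pred["score"], 3))
````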
## Training data
* BiodivBERT is pre-trained on abstracts and full text from biodiversity domain-related publications.
* We used both Elsevier and Springer APIs to crawl such data.
* We covered publications over the duration of 1990-2020.
## Evaluation results
BiodivBERT outperformed ``BERT_base_cased``, ``biobert_v1.1``, and a ``BiLSTM`` baseline on the downstream tasks. | [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION"
] | [
"BIORELEX",
"EU-ADR",
"GAD",
"LINNAEUS"
] |
tomaarsen/span-marker-bert-base-ncbi-disease | tomaarsen | token-classification | [
"span-marker",
"pytorch",
"tensorboard",
"safetensors",
"token-classification",
"ner",
"named-entity-recognition",
"en",
"dataset:ncbi_disease",
"license:apache-2.0",
"model-index",
"region:us"
] | 2023-08-09T13:55:13 | 2023-08-09T16:04:52 | 33 | 6 | ---
datasets:
- ncbi_disease
language:
- en
library_name: span-marker
license: apache-2.0
metrics:
- f1
- recall
- precision
pipeline_tag: token-classification
tags:
- span-marker
- token-classification
- ner
- named-entity-recognition
widget:
- text: X-Linked adrenoleukodystrophy (ALD) is a genetic disease associated with demyelination
of the central nervous system, adrenal insufficiency, and accumulation of very
long chain fatty acids in tissue and body fluids.
example_title: Example 1
- text: Canavan disease is inherited as an autosomal recessive trait that is caused
by the deficiency of aspartoacylase (ASPA).
example_title: Example 2
- text: However, both models lack other frequent DM symptoms including the fibre-type
dependent atrophy, myotonia, cataract and male-infertility.
example_title: Example 3
model-index:
- name: SpanMarker w. bert-base-cased on NCBI Disease by Tom Aarsen
results:
- task:
type: token-classification
name: Named Entity Recognition
dataset:
name: NCBI Disease
type: ncbi_disease
split: test
revision: acd0e6451198d5b615c12356ab6a05fff4610920
metrics:
- type: f1
value: 0.8813
name: F1
- type: precision
value: 0.8661
name: Precision
- type: recall
value: 0.8971
name: Recall
---
# SpanMarker for Disease Named Entity Recognition
This is a [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) model trained on the [ncbi_disease](https://huggingface.co/datasets/ncbi_disease) dataset. In particular, this SpanMarker model uses [bert-base-cased](https://huggingface.co/bert-base-cased) as the underlying encoder. See [train.py](train.py) for the training script.
## Metrics
This model achieves the following results on the testing set:
- Overall Precision: 0.8661
- Overall Recall: 0.8971
- Overall F1: 0.8813
- Overall Accuracy: 0.9837
## Labels
| **Label** | **Examples** |
|-----------|--------------|
| DISEASE | "ataxia-telangiectasia", "T-cell leukaemia", "C5D", "neutrophilic leukocytosis", "pyogenic infection" |
## Usage
To use this model for inference, first install the `span_marker` library:
```bash
pip install span_marker
```
You can then run inference with this model like so:
```python
from span_marker import SpanMarkerModel
# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-ncbi-disease")
# Run inference
entities = model.predict("Canavan disease is inherited as an autosomal recessive trait that is caused by the deficiency of aspartoacylase (ASPA).")
```
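`predict` returns one dictionary per detected span. A small follow-up sketch, continuing from the snippet above (the key names follow the span_marker documentation; treat them as an assumption if your version differs):

```python
# Continues from the snippet above: inspect each detected disease span.
for ent in entities:
    print(ent["span"], "->", ent["label"], f'(score={ent["score"]:.2f})')
```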
See the [SpanMarker](https://github.com/tomaarsen/SpanMarkerNER) repository for documentation and additional information on this library.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0038 | 1.41 | 300 | 0.0059 | 0.8141 | 0.8579 | 0.8354 | 0.9818 |
| 0.0018 | 2.82 | 600 | 0.0054 | 0.8315 | 0.8720 | 0.8513 | 0.9840 |
### Framework versions
- SpanMarker 1.2.4
- Transformers 4.31.0
- Pytorch 1.13.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.2
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"NCBI DISEASE"
] |
LoneStriker/OpenBioLLM-Llama3-70B-GGUF | LoneStriker | null | [
"gguf",
"llama-3",
"llama",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-70B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-70B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-04-26T20:56:57 | 2024-04-26T22:23:12 | 33 | 1 | ---
base_model: meta-llama/Meta-Llama-3-70B-Instruct
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
widget:
- example_title: OpenBioLLM-70B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-70B
results: []
---
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-70B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-70B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-70B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 70 billion parameters, OpenBioLLM-70B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-70B builds upon the powerful foundation of the [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-70B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 70 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-70B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-70B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-70B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 (greedy decoding) to make this happen less often.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-70B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # note: pipeline's device= parameter does not accept "auto"
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # greedy decoding; temperature=0.0 is invalid when do_sample=True
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 8
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- LM Evaluation Harness (for evaluation)
# Benchmark Results
🔥 OpenBioLLM-70B demonstrates superior performance compared to larger models, such as GPT-4, Gemini, Meditron-70B, Med-PaLM-1 & Med-PaLM-2 across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 86.06%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, medical document categorization

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B is intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources:
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | [
"QUESTION_ANSWERING"
] | [
"MEDQA",
"PUBMEDQA"
] |
bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-checkpoints-tmp | bobox | sentence-similarity | [
"sentence-transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:526885",
"loss:GISTEmbedLoss",
"loss:CoSENTLoss",
"loss:OnlineContrastiveLoss",
"loss:MultipleNegativesSymmetricRankingLoss",
"loss:MarginMSELoss",
"en",
"dataset:sentence-transformers/all-nli",
"dataset:sentence-transformers/stsb",
"dataset:tals/vitaminc",
"dataset:nyu-mll/glue",
"dataset:allenai/scitail",
"dataset:sentence-transformers/xsum",
"dataset:sentence-transformers/sentence-compression",
"dataset:allenai/sciq",
"dataset:allenai/qasc",
"dataset:allenai/openbookqa",
"dataset:sentence-transformers/natural-questions",
"dataset:sentence-transformers/trivia-qa",
"dataset:sentence-transformers/quora-duplicates",
"dataset:sentence-transformers/gooaq",
"arxiv:1908.10084",
"arxiv:2402.16829",
"arxiv:2010.02666",
"base_model:microsoft/deberta-v3-small",
"base_model:finetune:microsoft/deberta-v3-small",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-06-15T18:21:17 | 2024-06-20T16:28:33 | 33 | 0 | ---
base_model: microsoft/deberta-v3-small
datasets:
- sentence-transformers/all-nli
- sentence-transformers/stsb
- tals/vitaminc
- nyu-mll/glue
- allenai/scitail
- sentence-transformers/xsum
- sentence-transformers/sentence-compression
- allenai/sciq
- allenai/qasc
- allenai/openbookqa
- sentence-transformers/natural-questions
- sentence-transformers/trivia-qa
- sentence-transformers/quora-duplicates
- sentence-transformers/gooaq
language:
- en
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:526885
- loss:GISTEmbedLoss
- loss:CoSENTLoss
- loss:OnlineContrastiveLoss
- loss:MultipleNegativesSymmetricRankingLoss
- loss:MarginMSELoss
widget:
- source_sentence: A man in a Santa Claus costume is sitting on a wooden chair holding
a microphone and a stringed instrument.
sentences:
- The man is is near the ball.
- The man is wearing a costume.
- People are having a picnic.
- source_sentence: A street vendor selling his art.
sentences:
- A man is selling things on the street.
- A woman is walking outside.
- A clown is talking into a microphone.
- source_sentence: A boy looks surly as his father looks at the camera.
sentences:
- a boy looks at his farther
- A dark-haired girl in a spotted shirt is pointing at the picture while sitting
next to a boy wearing a purple shirt and jeans.
- Man and woman stop and chat with each other.
- source_sentence: Which company provided streetcar connections between downtown and
the hospital?
sentences:
- In 1914 developers Billings & Meyering acquired the tract, completed street development,
provided the last of the necessary municipal improvements including water service,
and began marketing the property with fervor.
- The war was fought primarily along the frontiers between New France and the British
colonies, from Virginia in the South to Nova Scotia in the North.
- 'On the basis of CST, Burnet developed a theory of how an immune response is triggered
according to the self/nonself distinction: "self" constituents (constituents of
the body) do not trigger destructive immune responses, while "nonself" entities
(pathogens, an allograft) trigger a destructive immune response.'
- source_sentence: What language did Tesla study while in school?
sentences:
- Because of the complexity of medications including specific indications, effectiveness
of treatment regimens, safety of medications (i.e., drug interactions) and patient
compliance issues (in the hospital and at home) many pharmacists practicing in
hospitals gain more education and training after pharmacy school through a pharmacy
practice residency and sometimes followed by another residency in a specific area.
- Rev. Jimmy Creech was defrocked after a highly publicized church trial in 1999
on account of his participation in same-sex union ceremonies.
- Tesla was the fourth of five children.
model-index:
- name: SentenceTransformer based on microsoft/deberta-v3-small
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: sts test
type: sts-test
metrics:
- type: pearson_cosine
value: 0.2520910673470529
name: Pearson Cosine
- type: spearman_cosine
value: 0.2588662067006675
name: Spearman Cosine
- type: pearson_manhattan
value: 0.30439718484055006
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.3013780326567434
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.25977707672353506
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.26078444276128726
name: Spearman Euclidean
- type: pearson_dot
value: 0.08121075567918108
name: Pearson Dot
- type: spearman_dot
value: 0.0753891417253212
name: Spearman Dot
- type: pearson_max
value: 0.30439718484055006
name: Pearson Max
- type: spearman_max
value: 0.3013780326567434
name: Spearman Max
- type: pearson_cosine
value: 0.7933255500721913
name: Pearson Cosine
- type: spearman_cosine
value: 0.7974636940357042
name: Spearman Cosine
- type: pearson_manhattan
value: 0.7981019600081939
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.7881373354371464
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.7953389212549029
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.785471057378488
name: Spearman Euclidean
- type: pearson_dot
value: 0.7742724036105891
name: Pearson Dot
- type: spearman_dot
value: 0.7646982940473647
name: Spearman Dot
- type: pearson_max
value: 0.7981019600081939
name: Pearson Max
- type: spearman_max
value: 0.7974636940357042
name: Spearman Max
---
# SentenceTransformer based on microsoft/deberta-v3-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on the [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli), [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb), [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc), [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue), [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail), [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail), [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum), [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression), [sciq_pairs](https://huggingface.co/datasets/allenai/sciq), [qasc_pairs](https://huggingface.co/datasets/allenai/qasc), [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa), msmarco_pairs, [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions), [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa), [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) and [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq) datasets. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) <!-- at revision a36c739020e01763fe789b4b85e2df55d6180012 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli)
- [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb)
- [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc)
- [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue)
- [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail)
- [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail)
- [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum)
- [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression)
- [sciq_pairs](https://huggingface.co/datasets/allenai/sciq)
- [qasc_pairs](https://huggingface.co/datasets/allenai/qasc)
- [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa)
- msmarco_pairs
- [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa)
- [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates)
- [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DebertaV2Model
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
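The `Pooling` module above is configured for mean pooling; as a rough sketch (with dummy tensors), the sentence embedding is the attention-mask-weighted average of the token embeddings:

```python
# Sketch of mean pooling as configured above: average token embeddings while
# ignoring padding positions via the attention mask. Tensors here are dummies.
import torch

token_embeddings = torch.randn(2, 12, 768)          # (batch, seq_len, hidden)
attention_mask = torch.ones(2, 12, dtype=torch.long)
attention_mask[1, 8:] = 0                           # second sequence has padding

mask = attention_mask.unsqueeze(-1).float()         # (batch, seq_len, 1)
summed = (token_embeddings * mask).sum(dim=1)       # zero out padding, then sum
counts = mask.sum(dim=1).clamp(min=1e-9)            # real-token count per sequence
sentence_embeddings = summed / counts               # (batch, 768)
print(sentence_embeddings.shape)                    # torch.Size([2, 768])
```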
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-checkpoints-tmp")
# Run inference
sentences = [
'What language did Tesla study while in school?',
'Tesla was the fourth of five children.',
'Rev. Jimmy Creech was defrocked after a highly publicized church trial in 1999 on account of his participation in same-sex union ceremonies.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.2521 |
| **spearman_cosine** | **0.2589** |
| pearson_manhattan | 0.3044 |
| spearman_manhattan | 0.3014 |
| pearson_euclidean | 0.2598 |
| spearman_euclidean | 0.2608 |
| pearson_dot | 0.0812 |
| spearman_dot | 0.0754 |
| pearson_max | 0.3044 |
| spearman_max | 0.3014 |
#### Semantic Similarity
* Dataset: `sts-test`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| pearson_cosine | 0.7933 |
| **spearman_cosine** | **0.7975** |
| pearson_manhattan | 0.7981 |
| spearman_manhattan | 0.7881 |
| pearson_euclidean | 0.7953 |
| spearman_euclidean | 0.7855 |
| pearson_dot | 0.7743 |
| spearman_dot | 0.7647 |
| pearson_max | 0.7981 |
| spearman_max | 0.7975 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### nli-pairs
* Dataset: [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 50,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 16.62 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.46 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:---------------------------------------------------------------------------|:-------------------------------------------------|
| <code>A person on a horse jumps over a broken down airplane.</code> | <code>A person is outdoors, on a horse.</code> |
| <code>Children smiling and waving at camera</code> | <code>There are children present</code> |
| <code>A boy is jumping on skateboard in the middle of a red bridge.</code> | <code>The boy does a skateboarding trick.</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
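For reference, a minimal sketch of constructing this loss with the `sentence-transformers` API; the guide checkpoint name below is a placeholder, since the card does not say which guide model was used:

```python
# Sketch: GISTEmbedLoss with a small guide model that filters in-batch
# negatives. The guide checkpoint below is an assumption, not from the card.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import GISTEmbedLoss

model = SentenceTransformer("microsoft/deberta-v3-small")
guide = SentenceTransformer("BAAI/bge-small-en-v1.5")  # placeholder guide

loss = GISTEmbedLoss(model, guide=guide, temperature=0.05)
```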
#### sts-label
* Dataset: [sts-label](https://huggingface.co/datasets/sentence-transformers/stsb) at [ab7a5ac](https://huggingface.co/datasets/sentence-transformers/stsb/tree/ab7a5ac0e35aa22088bdcf23e7fd99b220e53308)
* Size: 5,749 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 9.81 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 9.74 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.54</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-----------------------------------------------------------|:----------------------------------------------------------------------|:------------------|
| <code>A plane is taking off.</code> | <code>An air plane is taking off.</code> | <code>1.0</code> |
| <code>A man is playing a large flute.</code> | <code>A man is playing a flute.</code> | <code>0.76</code> |
| <code>A man is spreading shreded cheese on a pizza.</code> | <code>A man is spreading shredded cheese on an uncooked pizza.</code> | <code>0.76</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "pairwise_cos_sim"
}
```
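A minimal construction sketch for this loss; it expects pairs with a float similarity score, as in the columns above:

```python
# Sketch: CoSENTLoss with the scale configured above. Expects
# (sentence1, sentence2, score) rows with score in [0, 1].
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss

model = SentenceTransformer("microsoft/deberta-v3-small")
loss = CoSENTLoss(model, scale=20.0)
```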
#### vitaminc-pairs
* Dataset: [vitaminc-pairs](https://huggingface.co/datasets/tals/vitaminc) at [be6febb](https://huggingface.co/datasets/tals/vitaminc/tree/be6febb761b0b2807687e61e0b5282e459df2fa0)
* Size: 24,996 training samples
* Columns: <code>label</code>, <code>sentence1</code>, and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | label | sentence1 | sentence2 |
|:--------|:-----------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | int | string | string |
| details | <ul><li>1: 100.00%</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.65 tokens</li><li>max: 50 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 36.9 tokens</li><li>max: 161 tokens</li></ul> |
* Samples:
| label | sentence1 | sentence2 |
|:---------------|:-----------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>1</code> | <code>Linkin Park sold more than 30 million singles and 130 million records worldwide .</code> | <code>Linkin Park has sold over 100 million albums and 31 million singles worldwide , making a total of over 131 million records sold worldwide with 32,000,000 albums and 33,000,000 singles sold in the US as of June 2017 .</code> |
| <code>1</code> | <code>Anibal Sanchez has played for the Atlanta Braves .</code> | <code>He has played in Major League Baseball ( MLB ) for the Florida/Miami Marlins , Detroit Tigers and Atlanta Braves .</code> |
| <code>1</code> | <code>Frankenweenie has under 37 reviews on Metacritic , and a score above 74 .</code> | <code>Metacritic , which assigns a weighted average score out of 100 to reviews from mainstream critics , gives the film a score of 75 based on 35 reviews .</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### qnli-contrastive
* Dataset: [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue) at [bcdcba7](https://huggingface.co/datasets/nyu-mll/glue/tree/bcdcba79d07bc864c1c254ccfcedcce55bcc9a8c)
* Size: 50,000 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.54 tokens</li><li>max: 38 tokens</li></ul> | <ul><li>min: 9 tokens</li><li>mean: 35.96 tokens</li><li>max: 136 tokens</li></ul> | <ul><li>0: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:----------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>By what means did the British govern Tuvalu?</code> | <code>The Ellice Islands were administered as British protectorate by a Resident Commissioner from 1892 to 1916 as part of the British Western Pacific Territories (BWPT), and then as part of the Gilbert and Ellice Islands colony from 1916 to 1974.</code> | <code>0</code> |
| <code>Who is the current head of BBC Television?</code> | <code>As a division within the BBC, Television was formerly known as BBC Vision for a few years in the early 21st century, until its name reverted to Television in 2013.</code> | <code>0</code> |
| <code>What was the PLDA formerly known as?</code> | <code>The Professional Lighting Designers Association (PLDA), formerly known as ELDA is an organisation focusing on the promotion of the profession of Architectural Lighting Design.</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
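A minimal construction sketch; this loss consumes the (sentence1, sentence2, label) rows above and emphasizes hard positives and hard negatives within each batch:

```python
# Sketch: OnlineContrastiveLoss with default margin and cosine distance.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import OnlineContrastiveLoss

model = SentenceTransformer("microsoft/deberta-v3-small")
loss = OnlineContrastiveLoss(model)
```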
#### scitail-pairs-qa
* Dataset: [scitail-pairs-qa](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44)
* Size: 14,987 training samples
* Columns: <code>sentence2</code> and <code>sentence1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence2 | sentence1 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 15.63 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 14.73 tokens</li><li>max: 41 tokens</li></ul> |
* Samples:
| sentence2 | sentence1 |
|:----------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------|
| <code>People stopped adding lead to gasoline because of environmental pollution.</code> | <code>Why did people stop adding lead to gasoline?</code> |
| <code>The pleura that surrounds the lungs consists of two layers.</code> | <code>The pleura that surrounds the lungs consists of how many layers?</code> |
| <code>Thermal energy constitutes the total kinetic energy of all the atoms that make up an object.</code> | <code>What kind of energy constitutes the total kinetic energy of all the atoms that make up an object?</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### scitail-pairs-pos
* Dataset: [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44)
* Size: 8,600 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 24.02 tokens</li><li>max: 71 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 15.66 tokens</li><li>max: 39 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------|
| <code>TELEPHONE (818) 354-5011 PHOTO CAPTION P-23254 C & BW S-1-62 Dec. 4, 1980 Voyager 1 looked back at Saturn on Nov. 16, 1980, four days after the spacecraft flew past the planet, to observe the appearance of Saturn and its rings from this unique perspective.</code> | <code>The voyager 1 spacecraft visited saturn in 1980.</code> |
| <code>atoms may share one pair of electrons (single bonds), two pairs (double bonds), or three pairs (triple bonds).</code> | <code>In a carbon triple bond, three pairs of electrons are shared.</code> |
| <code>One gram of protein contains four calories.</code> | <code>One gram of proteins provides four calories of energy.</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### xsum-pairs
* Dataset: [xsum-pairs](https://huggingface.co/datasets/sentence-transformers/xsum) at [788ddaf](https://huggingface.co/datasets/sentence-transformers/xsum/tree/788ddafe04e539956d56b567bc32a036ee7b9206)
* Size: 50,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 2 tokens</li><li>mean: 351.25 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 26.7 tokens</li><li>max: 59 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Bivsi Rana, 15, was born in Germany to Nepalese parents. In May she was deported with the rest of her family.<br>Her classmates protested and lobbied on her behalf against the deportation, drawing hundreds of people to rally under the slogan "Bring Bivsi back".<br>Officials called it a "unique case" and said Bivsi was "de facto German".<br>Mayor of Duisburg Sören Link said: "The fact that we have managed to resolve this difficult situation lifts a burden from my shoulders."<br>Bivsi's parents moved to Germany in 1998, fleeing civil conflict in their native Nepal, but their applications for asylum were denied. Their repeated appeals were rejected. Fearing political repercussions at home Bivsi's father, Mr Rana, initially applied for asylum under a false name and has since called this "the worst mistake" of his life.<br>But Bivsi herself was born and brought up in Germany.<br>On the last Monday in May, Bivsi was in class at school in Duisburg, in north west Germany, when she was told she had to pack her things and leave. That same day she and her family were deported to Nepal, a country Bivsi had never visited before.<br>Class teacher Sascha Thamm told German media afterwards that all the girls in the class cried and Bivsi's best friend broke down to the extent that an emergency doctor had to be called.<br>Mr Thamm said Bivsi was a kind, engaged student who was good at German and science and helped teach swimming lessons.<br>Bivsi has been living in Nepal with her family and, according to reports, has been unable to find a new school there due to language issues.<br>She has now been given a study visa enabling her to return Germany while she finishes her education. Her parents can return with her.<br>North-Rhine Westphalia state's integration minister Joachim Stamp said: "This is a unique case and generalisations cannot be drawn from it.<br>"The right of the child stood in the foreground in this decision.<br>"Bivsi was born in Germany and grew up here - she is de facto a German child."<br>Bivsi is reported to be "totally happy" with the decision, and her parents are reported to be "overjoyed".</code> | <code>A teenager who was removed from her classroom and deported to Nepal has been allowed to return to Germany on a study visa.</code> |
| <code>It was bought by an individual from the Dorset area in a phone bid.<br>A piece of paper found with the hair said "A single hair of Napoleon Bonaparte's head 29th August 1816" and "5th May 1821' - the date Napoleon died.<br>The strand of hair was attached to a piece of paper by red sealing wax.<br>Auctioneer Max Beaumont, of Cottees Auction House, Wareham, said it was found in a drawer by a colleague doing a home valuation.<br>He said they found a small goldsmith's box and expected to find a watch, but instead they found the folded paper.<br>The hair is understood to have been owned by the family for the whole of the 20th Century, but has not been professionally analysed.<br>The initial estimate was £100 to £200.<br>Mr Beaumont, who at 19 claims to be one of the youngest auctioneers in the country, said: "There has been a lot of interest."<br>Napoleon Bonaparte was a French emperor who conquered much of Europe. He was defeated in the Battle of Waterloo and imprisoned by the British on the remote Atlantic island of St Helena, where he died on 5 May 1821.</code> | <code>A strand of hair believed to be from the head of former French emperor Napoleon Bonaparte's head has sold for £130 at auction in Dorset.</code> |
| <code>Local Government Association figures show that councils will have spent £505m by 2017 on fighting obesity.<br>Councils use the money to measure children's weight at primary school, help people lose weight and offer free or cheaper leisure facilities.<br>Public health became the responsibility of local authorities in April 2013.<br>Before that, it was run by the NHS.<br>The Department of Health said it was committed to tackling obesity and the government had announced a sugar tax on soft drinks manufacturers earlier in the year.<br>The Local Government Association (LGA) receives money from the government to spend on public health, and this sum will fall from £3.38bn in 2016/17 to £3.13bn in 2020/21.<br>The association, which represents more than 370 councils - mostly in England and a few in Wales - said it was set to spend about half a billion pounds on obesity prevention in adults and children over four years.<br>This was made up as follows:<br>The LGA said the figures illustrated the amount of prevention work councils were carrying out and showed the scale of the obesity crisis.<br>The costs include running the government's National Child Measurement Programme, which involves calculating a child's BMI (body mass index) when they start primary school and again when they leave school in Year Six.<br>Recent figures showed that in 2014/15 in England, one in 10 children aged four and five was obese and one in five children aged 10 to 11 was obese.<br>The LGA said the overall cost of obesity was forecast to rise further.<br>It has previously called on the government to reduce sugar content in fizzy drinks, make sugar labelling clearer and provide more tap water in schools and restaurants.<br>Councils also want to have powers to ban junk food advertising near schools.<br>Izzi Seccombe, who is in charge of community wellbeing for the LGA, said councils were best placed to tackle obesity before it became a problem, but they needed more support.<br>"We would like assurances from the government's new administration that the long-awaited childhood obesity strategy is still on track and that it includes tough measures that will help to reverse the rise in costs and children becoming obese.<br>"Today's obese children will be tomorrow's obese adults, and with this comes a range of costly and debilitating major health conditions."</code> | <code>Local councils in England are warning that government cuts to public health funding could hamper their efforts to tackle obesity.</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
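A minimal construction sketch for this loss as configured above:

```python
# Sketch: MultipleNegativesSymmetricRankingLoss; each (document, summary)-style
# pair is ranked against in-batch negatives in both directions.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesSymmetricRankingLoss

model = SentenceTransformer("microsoft/deberta-v3-small")
loss = MultipleNegativesSymmetricRankingLoss(model, scale=20.0)
```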
#### compression-pairs
* Dataset: [compression-pairs](https://huggingface.co/datasets/sentence-transformers/sentence-compression) at [605bc91](https://huggingface.co/datasets/sentence-transformers/sentence-compression/tree/605bc91d95631895ba25b6eda51a3cb596976c90)
* Size: 50,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 31.89 tokens</li><li>max: 125 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.21 tokens</li><li>max: 28 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------|
| <code>The USHL completed an expansion draft on Monday as 10 players who were on the rosters of USHL teams during the 2009-10 season were selected by the League's two newest entries, the Muskegon Lumberjacks and Dubuque Fighting Saints.</code> | <code>USHL completes expansion draft</code> |
| <code>Major League Baseball Commissioner Bud Selig will be speaking at St. Norbert College next month.</code> | <code>Bud Selig to speak at St. Norbert College</code> |
| <code>It's fresh cherry time in Michigan and the best time to enjoy this delicious and nutritious fruit.</code> | <code>It's cherry time</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### sciq_pairs
* Dataset: [sciq_pairs](https://huggingface.co/datasets/allenai/sciq) at [2c94ad3](https://huggingface.co/datasets/allenai/sciq/tree/2c94ad3e1aafab77146f384e23536f97a4849815)
* Size: 11,679 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 7 tokens</li><li>mean: 17.26 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 84.37 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What type of organism is commonly used in preparation of foods such as cheese and yogurt?</code> | <code>Mesophiles grow best in moderate temperature, typically between 25°C and 40°C (77°F and 104°F). Mesophiles are often found living in or on the bodies of humans or other animals. The optimal growth temperature of many pathogenic mesophiles is 37°C (98°F), the normal human body temperature. Mesophilic organisms have important uses in food preparation, including cheese, yogurt, beer and wine.</code> |
| <code>What phenomenon makes global winds blow northeast to southwest or the reverse in the northern hemisphere and northwest to southeast or the reverse in the southern hemisphere?</code> | <code>Without Coriolis Effect the global winds would blow north to south or south to north. But Coriolis makes them blow northeast to southwest or the reverse in the Northern Hemisphere. The winds blow northwest to southeast or the reverse in the southern hemisphere.</code> |
| <code>Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always what?</code> | <code>Summary Changes of state are examples of phase changes, or phase transitions. All phase changes are accompanied by changes in the energy of a system. Changes from a more-ordered state to a less-ordered state (such as a liquid to a gas) areendothermic. Changes from a less-ordered state to a more-ordered state (such as a liquid to a solid) are always exothermic. The conversion of a solid to a liquid is called fusion (or melting). The energy required to melt 1 mol of a substance is its enthalpy of fusion (ΔHfus). The energy change required to vaporize 1 mol of a substance is the enthalpy of vaporization (ΔHvap). The direct conversion of a solid to a gas is sublimation. The amount of energy needed to sublime 1 mol of a substance is its enthalpy of sublimation (ΔHsub) and is the sum of the enthalpies of fusion and vaporization. Plots of the temperature of a substance versus heat added or versus heating time at a constant rate of heating are calledheating curves. Heating curves relate temperature changes to phase transitions. A superheated liquid, a liquid at a temperature and pressure at which it should be a gas, is not stable. A cooling curve is not exactly the reverse of the heating curve because many liquids do not freeze at the expected temperature. Instead, they form a supercooled liquid, a metastable liquid phase that exists below the normal melting point. Supercooled liquids usually crystallize on standing, or adding a seed crystal of the same or another substance can induce crystallization.</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### qasc_pairs
* Dataset: [qasc_pairs](https://huggingface.co/datasets/allenai/qasc) at [a34ba20](https://huggingface.co/datasets/allenai/qasc/tree/a34ba204eb9a33b919c10cc08f4f1c8dae5ec070)
* Size: 8,134 training samples
* Columns: <code>id</code>, <code>sentence1</code>, and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | id | sentence1 | sentence2 |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 17 tokens</li><li>mean: 21.35 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 11.47 tokens</li><li>max: 25 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 35.55 tokens</li><li>max: 66 tokens</li></ul> |
* Samples:
| id | sentence1 | sentence2 |
|:--------------------------------------------|:---------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>3E7TUJ2EGCLQNOV1WEAJ2NN9ROPD9K</code> | <code>What type of water formation is formed by clouds?</code> | <code>beads of water are formed by water vapor condensing. Clouds are made of water vapor.. Beads of water can be formed by clouds.</code> |
| <code>3LS2AMNW5FPNJK3C3PZLZCPX562OQO</code> | <code>Where do beads of water come from?</code> | <code>beads of water are formed by water vapor condensing. Condensation is the change of water vapor to a liquid.. Vapor turning into a liquid leaves behind beads of water</code> |
| <code>3TMFV4NEP8DPIPCI8H9VUFHJG8V8W3</code> | <code>What forms beads of water? </code> | <code>beads of water are formed by water vapor condensing. An example of water vapor is steam.. Steam forms beads of water.</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### openbookqa_pairs
* Dataset: [openbookqa_pairs](https://huggingface.co/datasets/allenai/openbookqa) at [388097e](https://huggingface.co/datasets/allenai/openbookqa/tree/388097ea7776314e93a529163e0fea805b8a6454)
* Size: 2,740 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 3 tokens</li><li>mean: 13.83 tokens</li><li>max: 78 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 11.37 tokens</li><li>max: 30 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:-------------------------------------------------|:--------------------------------------------------------------------------|
| <code>The sun is responsible for</code> | <code>the sun is the source of energy for physical cycles on Earth</code> |
| <code>When food is reduced in the stomach</code> | <code>digestion is when stomach acid breaks down food</code> |
| <code>Stars are</code> | <code>a star is made of gases</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### msmarco_pairs
* Dataset: msmarco_pairs
* Size: 50,000 training samples
* Columns: <code>query</code>, <code>positive</code>, <code>negative</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative | label |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------|
| type | string | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 8.61 tokens</li><li>max: 27 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 75.09 tokens</li><li>max: 206 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 72.59 tokens</li><li>max: 216 tokens</li></ul> | <ul><li>min: -0.5</li><li>mean: 0.04</li><li>max: 0.6</li></ul> |
* Samples:
| query | positive | negative | label |
|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------|
| <code>what are the liberal arts?</code> | <code>liberal arts. 1. the academic course of instruction at a college intended to provide general knowledge and comprising the arts, humanities, natural sciences, and social sciences, as opposed to professional or technical subjects.</code> | <code>The New York State Education Department requires 60 Liberal Arts credits in a Bachelor of Science program and 90 Liberal Arts credits in a Bachelor of Arts program. In the list of course descriptions, courses which are liberal arts for all students are identified by (Liberal Arts) after the course number.</code> | <code>0.12154221534729004</code> |
| <code>what is the mechanism of action of fibrinolytic or thrombolytic drugs?</code> | <code>Baillière's Clinical Haematology. 6 Mechanism of action of the thrombolytic agents. 6 Mechanism of action of the thrombolytic agents JEFFREY I. WEITZ Fibrin formed during the haemostatic, inflammatory or tissue repair process serves a temporary role, and must be degraded to restore normal tissue function and structure.</code> | <code>Fibrinolytic drug. Fibrinolytic drug, also called thrombolytic drug, any agent that is capable of stimulating the dissolution of a blood clot (thrombus). Fibrinolytic drugs work by activating the so-called fibrinolytic pathway.</code> | <code>-0.05174225568771362</code> |
| <code>what is normal plat count</code> | <code>78 Followers. A. Platelets are the tiny blood cells that help stop bleeding by binding together to form a clump or plug at sites of injury inside blood vessels. A normal platelet count is between 150,000 and 450,000 platelets per microliter (one-millionth of a liter, abbreviated mcL).The average platelet count is 237,000 per mcL in men and 266,000 per mcL in women.8 Followers. A. Platelets are the tiny blood cells that help stop bleeding by binding together to form a clump or plug at sites of injury inside blood vessels. A normal platelet count is between 150,000 and 450,000 platelets per microliter (one-millionth of a liter, abbreviated mcL).</code> | <code>Your blood test results should be written in your maternity notes. Your platelet count will look something like Plat. 160x10.9/L, which means you have a platelet count of 160, which is in the normal range.If your platelet count is low, the blood test should be done again.This will keep track of whether or not your count is dropping.our platelet count will look something like Plat. 160x10.9/L, which means you have a platelet count of 160, which is in the normal range. If your platelet count is low, the blood test should be done again. This will keep track of whether or not your count is dropping.</code> | <code>-0.037523627281188965</code> |
* Loss: [<code>MarginMSELoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#marginmseloss)
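A minimal construction sketch; the float `label` above is a teacher margin, score(query, positive) - score(query, negative), which the student model learns to reproduce:

```python
# Sketch: MarginMSELoss for (query, positive, negative, margin) rows; the
# student's score margin is regressed onto the teacher's margin label.
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MarginMSELoss

model = SentenceTransformer("microsoft/deberta-v3-small")
loss = MarginMSELoss(model)
```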
#### nq_pairs
* Dataset: [nq_pairs](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 50,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.77 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 16 tokens</li><li>mean: 131.57 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:----------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>when did richmond last play in a preliminary final</code> | <code>Richmond Football Club Richmond began 2017 with 5 straight wins, a feat it had not achieved since 1995. A series of close losses hampered the Tigers throughout the middle of the season, including a 5-point loss to the Western Bulldogs, 2-point loss to Fremantle, and a 3-point loss to the Giants. Richmond ended the season strongly with convincing victories over Fremantle and St Kilda in the final two rounds, elevating the club to 3rd on the ladder. Richmond's first final of the season against the Cats at the MCG attracted a record qualifying final crowd of 95,028; the Tigers won by 51 points. Having advanced to the first preliminary finals for the first time since 2001, Richmond defeated Greater Western Sydney by 36 points in front of a crowd of 94,258 to progress to the Grand Final against Adelaide, their first Grand Final appearance since 1982. The attendance was 100,021, the largest crowd to a grand final since 1986. The Crows led at quarter time and led by as many as 13, but the Tigers took over the game as it progressed and scored seven straight goals at one point. They eventually would win by 48 points – 16.12 (108) to Adelaide's 8.12 (60) – to end their 37-year flag drought.[22] Dustin Martin also became the first player to win a Premiership medal, the Brownlow Medal and the Norm Smith Medal in the same season, while Damien Hardwick was named AFL Coaches Association Coach of the Year. Richmond's jump from 13th to premiers also marked the biggest jump from one AFL season to the next.</code> |
| <code>who sang what in the world's come over you</code> | <code>Jack Scott (singer) At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records.[1] He then recorded four Billboard Hot 100 hits – "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38).[1] "What in the World's Come Over You" was Scott's second gold disc winner.[6] Scott continued to record and perform during the 1960s and 1970s.[1] His song "You're Just Gettin' Better" reached the country charts in 1974.[1] In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey, John Peel.</code> |
| <code>who produces the most wool in the world</code> | <code>Wool Global wool production is about 2 million tonnes per year, of which 60% goes into apparel. Wool comprises ca 3% of the global textile market, but its value is higher owing to dying and other modifications of the material.[1] Australia is a leading producer of wool which is mostly from Merino sheep but has been eclipsed by China in terms of total weight.[30] New Zealand (2016) is the third-largest producer of wool, and the largest producer of crossbred wool. Breeds such as Lincoln, Romney, Drysdale, and Elliotdale produce coarser fibers, and wool from these sheep is usually used for making carpets.</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### trivia_pairs
* Dataset: [trivia_pairs](https://huggingface.co/datasets/sentence-transformers/trivia-qa) at [a7c36e3](https://huggingface.co/datasets/sentence-transformers/trivia-qa/tree/a7c36e3c8c8c01526bc094d79bf80d4c848b0ad0)
* Size: 50,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 15.16 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 456.87 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:--------------------------------------------|:--------------------------------------------|
| <code>Which American-born Sinclair won the Nobel Prize for Literature in 1930?</code> | <code>The Nobel Prize in Literature 1930 The Nobel Prize in Literature 1930 Sinclair Lewis The Nobel Prize in Literature 1930 Sinclair Lewis Prize share: 1/1 The Nobel Prize in Literature 1930 was awarded to Sinclair Lewis "for his vigorous and graphic art of description and his ability to create, with wit and humour, new types of characters". Photos: Copyright © The Nobel Foundation Share this: To cite this page MLA style: "The Nobel Prize in Literature 1930". Nobelprize.org. Nobel Media AB 2014. Web. 18 Jan 2017. <http://www.nobelprize.org/nobel_prizes/literature/laureates/1930/></code> |
| <code>Where in England was Dame Judi Dench born?</code> | <code>Judi Dench - IMDb IMDb Actress | Music Department | Soundtrack Judi Dench was born in York, England, to Eleanora Olive (Jones), who was from Dublin, Ireland, and Reginald Arthur Dench, a doctor from Dorset, England. She attended Mount School in York, and studied at the Central School of Speech and Drama. She has performed with Royal Shakespeare Company, the National Theatre, and at Old Vic Theatre. She is a ... See full bio » Born: a list of 35 people created 02 Jul 2011 a list of 35 people created 19 Apr 2012 a list of 35 people created 28 May 2014 a list of 25 people created 05 Aug 2014 a list of 26 people created 18 May 2015 Do you have a demo reel? Add it to your IMDbPage How much of Judi Dench's work have you seen? User Polls Won 1 Oscar. Another 59 wins & 163 nominations. See more awards » Known For 2016 The Hollow Crown (TV Series) Cecily, Duchess of York 2015 The Vote (TV Movie) Christine Metcalfe - Total War (1996) ... Narrator (voice) - Stalemate (1996) ... Narrator (voice) 1992 The Torch (TV Mini-Series) Aba 1990 Screen One (TV Series) Anne 1989 Behaving Badly (TV Mini-Series) Bridget 1981 BBC2 Playhouse (TV Series) Sister Scarli 1976 Arena (TV Series documentary) Sweetie Simpkins 1973 Ooh La La! (TV Series) Amélie 1966 Court Martial (TV Series) Marthe 1963 Z Cars (TV Series) Elena Collins 1963 Love Story (TV Series) Pat McKendrick 1960 The Terrible Choice (TV Series) Good Angel Music department (1 credit) A Fine Romance (TV Series) (theme sung by - 14 episodes, 1981 - 1983) (theme song sung by - 12 episodes, 1983 - 1984) - A Romantic Meal (1984) ... (theme song sung by) - Problems (1984) ... (theme song sung by) 2013 Fifty Years on Stage (TV Movie) (performer: "Send in the Clowns") 2009 Nine (performer: "Folies Bergère") - What's Wrong with Mrs Bale? (1997) ... (performer: "Raindrops Keep Fallin' On My Head" - uncredited) - Misunderstandings (1993) ... (performer: "Walkin' My Baby Back Home" - uncredited) 1982-1984 A Fine Romance (TV Series) (performer - 2 episodes) - The Telephone Call (1984) ... (performer: "Boogie Woogie Bugle Boy" - uncredited) - Furniture (1982) ... (performer: "Rule, Britannia!" - uncredited) Hide 2009 Waiting in Rhyme (Video short) (special thanks) 2007 Expresso (Short) (special thanks) 1999 Shakespeare in Love and on Film (TV Movie documentary) (thanks - as Dame Judi Dench) Hide 2016 Rio Olympics (TV Mini-Series) Herself 2015 In Conversation (TV Series documentary) Herself 2015 Entertainment Tonight (TV Series) Herself 2015 CBS This Morning (TV Series) Herself - Guest 2015 The Insider (TV Series) Herself 1999-2014 Cinema 3 (TV Series) Herself 2013 Good Day L.A. (TV Series) Herself - Guest 2013 Arena (TV Series documentary) Herself 2013 At the Movies (TV Series) Herself 2013 Shooting Bond (Video documentary) Herself 2013 Bond's Greatest Moments (TV Movie documentary) Herself 2012 Made in Hollywood (TV Series) Herself 1999-2012 Charlie Rose (TV Series) Herself - Guest 2008-2012 This Morning (TV Series) Herself - Guest 2012 The Secrets of Skyfall (TV Short documentary) Herself 2012 Anderson Live (TV Series) Herself 2012 J. Edgar: A Complicated Man (Video documentary short) Herself 2011 The Many Faces of... (TV Series documentary) Herself / Various Characters 2011 Na plovárne (TV Series) Herself 2010 BBC Proms (TV Series) Herself 2010 The South Bank Show Revisited (TV Series documentary) Herself - Episode #6.68 (2009) ... Herself - Guest (as Dame Judi Dench) 2007-2009 Breakfast (TV Series) 2009 Larry King Live (TV Series) Herself - Guest 2009 The One Show (TV Series) Herself 2009 Cranford in Detail (Video documentary short) Herself / Miss Matty Jenkins (as Dame Judi Dench) 2005-2008 The South Bank Show (TV Series documentary) Herself 2008 Tavis Smiley (TV Series) Herself - Guest 2007 ITV News (TV Series) Herself - BAFTA Nominee 2007 The Making of Cranford (Video documentary short) Herself / Miss Matty Jenkyns (as Dame Judi Dench) 2006 Becoming Bond (TV Movie documentary) Herself 2006 Corazón de... (TV Series) Hers</code> |
| <code>In which decade did Billboard magazine first publish and American hit chart?</code> | <code>The US Billboard song chart The US Billboard song chart Search this site with Google Song chart US Billboard The Billboard magazine has published various music charts starting (with sheet music) in 1894, the first "Music Hit Parade" was published in 1936 , the first "Music Popularity Chart" was calculated in 1940 . These charts became less irregular until the weekly "Hot 100" was started in 1958 . The current chart combines sales, airplay and downloads. A music collector that calls himself Bullfrog has been consolidating the complete chart from 1894 to the present day. he has published this information in a comprehenive spreadsheet (which can be obtained at bullfrogspond.com/ ). The Bullfrog data assigns each song a unique identifier, something like "1968_076" (which just happens to be the Bee Gees song "I've Gotta Get A Message To You"). This "Whitburn Number" is provided to match with the books of Joel Whitburn and consists of the year and a ranking within the year. A song that first entered the charts in December and has a long run is listed the following year. This numbering scheme means that songs which are still in the charts cannot be assigned a final id, because their ranking might change. So the definitive listing for a year cannot be final until about April. In our listing we only use songs with finalised IDs, this means that every year we have to wait until last year's entries are finalised before using them. (Source bullfrogspond.com/ , the original version used here was 20090808 with extra data from: the 2009 data from 20091219 the 2010 data from 20110305 the 2011 data from 20120929 the 2012 data from 20130330 the 2013 data from 20150328 The 20150328 data was the last one produced before the Billboard company forced the data to be withdrawn. As far as we know there are no more recent data sets available. This pattern of obtaining the data for a particular year in the middle of the following one comes from the way that the Bullfrog project generates the identifier for a song (what they call the "Prefix" in the spreadsheet). Recent entries are identified with keys like "2015-008" while older ones have keys like "2013_177". In the second case the underscore is significant, it indicates that this was the 177th biggest song released in 2013. Now, of course, during the year no one knows where a particular song will rank, so the underscore names can't be assigned until every song from a particular year has dropped out of the charts, so recent records are temporarily assigned a name with a dash. In about May of the following year the rankings are calculated and the final identifiers are assigned. That is why we at the Turret can only grab this data retrospectively. Attributes The original spreadsheet has a number of attributes, we have limited our attention to just a few of them: 134 9 The songs with the most entries on the chart were White Christmas (with 33 versions and a total of 110 weeks) and Stardust (with 19 and a total of 106 weeks). position The peak position that songs reached in the charts should show an smooth curve from number one down to the lowest position. This chart has more songs in the lower peak positions than one would expect. Before 1991 the profile of peak positions was exactly as you would expect, that year Billboard introduced the concept of "Recurrent" tracks, that is they removed any track from the chart which had spent more than twenty weeks in the chart and had fallen to the lower positions. weeks The effect of the "Recurrent" process, by which tracks are removed if they have spent at least twenty weeks in the chart and have fallen to the lower reaches, can clearly be seen in the strange spike in this attribute. This "adjustment" was intended to promote newer songs and ensure the chart does not become "stale". In fact since it was introduced in 1991 the length of long chart runs has increased, this might reflect the more conscious efforts of record companies to "game" the charts by controlling release times and promotions, or it coul</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### quora_pairs
* Dataset: [quora_pairs](https://huggingface.co/datasets/sentence-transformers/quora-duplicates) at [451a485](https://huggingface.co/datasets/sentence-transformers/quora-duplicates/tree/451a4850bd141edb44ade1b5828c259abd762cdb)
* Size: 50,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 13.53 tokens</li><li>max: 42 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 13.68 tokens</li><li>max: 43 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:----------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------|
| <code>Astrology: I am a Capricorn Sun Cap moon and cap rising...what does that say about me?</code> | <code>I'm a triple Capricorn (Sun, Moon and ascendant in Capricorn) What does this say about me?</code> |
| <code>How can I be a good geologist?</code> | <code>What should I do to be a great geologist?</code> |
| <code>How do I read and find my YouTube comments?</code> | <code>How can I see all my Youtube comments?</code> |
* Loss: [<code>MultipleNegativesSymmetricRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativessymmetricrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
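For reference, this loss maps directly onto the sentence-transformers API; a minimal sketch with the parameters shown above (the base model name is an assumption inferred from this card's `hub_model_id` below):

```python
from sentence_transformers import SentenceTransformer, losses, util

# assumption: the DeBERTaV3-small base implied by the hub_model_id in this card
model = SentenceTransformer("microsoft/deberta-v3-small")

# scale and similarity function mirror the JSON parameters above
loss = losses.MultipleNegativesSymmetricRankingLoss(
    model=model, scale=20.0, similarity_fct=util.cos_sim
)
```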
#### gooaq_pairs
* Dataset: [gooaq_pairs](https://huggingface.co/datasets/sentence-transformers/gooaq) at [b089f72](https://huggingface.co/datasets/sentence-transformers/gooaq/tree/b089f728748a068b7bc5234e5bcf5b25e3c8279c)
* Size: 50,000 training samples
* Columns: <code>sentence1</code> and <code>sentence2</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 |
|:--------|:---------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 11.6 tokens</li><li>max: 21 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 57.74 tokens</li><li>max: 127 tokens</li></ul> |
* Samples:
| sentence1 | sentence2 |
|:---------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>is toprol xl the same as metoprolol?</code> | <code>Metoprolol succinate is also known by the brand name Toprol XL. It is the extended-release form of metoprolol. Metoprolol succinate is approved to treat high blood pressure, chronic chest pain, and congestive heart failure.</code> |
| <code>are you experienced cd steve hoffman?</code> | <code>The Are You Experienced album was apparently mastered from the original stereo UK master tapes (according to Steve Hoffman - one of the very few who has heard both the master tapes and the CDs produced over the years). ... The CD booklets were a little sparse, but at least they stayed true to the album's original design.</code> |
| <code>how are babushka dolls made?</code> | <code>Matryoshka dolls are made of wood from lime, balsa, alder, aspen, and birch trees; lime is probably the most common wood type. ... After cutting, the trees are stripped of most of their bark, although a few inner rings of bark are left to bind the wood and keep it from splitting.</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
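All of the GISTEmbedLoss entries above share that guide configuration; a hedged sketch of constructing such a loss (both model names are assumptions — the card only prints the guide's architecture, a 384-dimensional CLS-pooled BERT encoder with normalisation):

```python
from sentence_transformers import SentenceTransformer, losses

# assumption: the model being trained (inferred from the hub_model_id in this card)
model = SentenceTransformer("microsoft/deberta-v3-small")
# assumption: any guide encoder matching the printed architecture would do
guide = SentenceTransformer("BAAI/bge-small-en-v1.5")

# the guide model scores in-batch candidates so that likely false negatives are filtered out
loss = losses.GISTEmbedLoss(model=model, guide=guide, temperature=0.05)
```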
### Evaluation Datasets
#### nli-pairs
* Dataset: [nli-pairs](https://huggingface.co/datasets/sentence-transformers/all-nli) at [d482672](https://huggingface.co/datasets/sentence-transformers/all-nli/tree/d482672c8e74ce18da116f430137434ba2e52fab)
* Size: 6,808 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 17.64 tokens</li><li>max: 63 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 9.67 tokens</li><li>max: 29 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------|
| <code>Two women are embracing while holding to go packages.</code> | <code>Two woman are holding packages.</code> |
| <code>Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.</code> | <code>Two kids in numbered jerseys wash their hands.</code> |
| <code>A man selling donuts to a customer during a world exhibition event held in the city of Angeles</code> | <code>A man selling donuts to a customer.</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### scitail-pairs-pos
* Dataset: [scitail-pairs-pos](https://huggingface.co/datasets/allenai/scitail) at [0cc4353](https://huggingface.co/datasets/allenai/scitail/tree/0cc4353235b289165dfde1c7c5d1be983f99ce44)
* Size: 1,304 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------|
| type | string | string | int |
| details | <ul><li>min: 5 tokens</li><li>mean: 22.52 tokens</li><li>max: 67 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 15.34 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>0: ~47.50%</li><li>1: ~52.50%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:----------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:---------------|
| <code>An introduction to atoms and elements, compounds, atomic structure and bonding, the molecule and chemical reactions.</code> | <code>Replace another in a molecule happens to atoms during a substitution reaction.</code> | <code>0</code> |
| <code>Wavelength The distance between two consecutive points on a sinusoidal wave that are in phase;</code> | <code>Wavelength is the distance between two corresponding points of adjacent waves called.</code> | <code>1</code> |
| <code>humans normally have 23 pairs of chromosomes.</code> | <code>Humans typically have 23 pairs pairs of chromosomes.</code> | <code>1</code> |
* Loss: [<code>GISTEmbedLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#gistembedloss) with these parameters:
```json
{'guide': SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
), 'temperature': 0.05}
```
#### qnli-contrastive
* Dataset: [qnli-contrastive](https://huggingface.co/datasets/nyu-mll/glue) at [bcdcba7](https://huggingface.co/datasets/nyu-mll/glue/tree/bcdcba79d07bc864c1c254ccfcedcce55bcc9a8c)
* Size: 5,463 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 6 tokens</li><li>mean: 14.13 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 36.58 tokens</li><li>max: 225 tokens</li></ul> | <ul><li>0: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>What came into force after the new constitution was herald?</code> | <code>As of that day, the new constitution heralding the Second Republic came into force.</code> | <code>0</code> |
| <code>What is the first major city in the stream of the Rhine?</code> | <code>The most important tributaries in this area are the Ill below of Strasbourg, the Neckar in Mannheim and the Main across from Mainz.</code> | <code>0</code> |
| <code>What is the minimum required if you want to teach in Canada?</code> | <code>In most provinces a second Bachelor's Degree such as a Bachelor of Education is required to become a qualified teacher.</code> | <code>0</code> |
* Loss: [<code>OnlineContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#onlinecontrastiveloss)
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 30
- `per_device_eval_batch_size`: 16
- `learning_rate`: 1e-05
- `weight_decay`: 5e-06
- `num_train_epochs`: 2
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.5
- `save_safetensors`: False
- `fp16`: True
- `push_to_hub`: True
- `hub_model_id`: bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-checkpoints-tmp
- `hub_strategy`: checkpoint
- `batch_sampler`: no_duplicates
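As a hedged sketch, these non-default values map onto the Sentence Transformers v3 training API roughly as follows (the output directory is illustrative):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # illustrative
    eval_strategy="steps",
    per_device_train_batch_size=30,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    weight_decay=5e-6,
    num_train_epochs=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.5,
    save_safetensors=False,
    fp16=True,
    push_to_hub=True,
    hub_model_id="bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-checkpoints-tmp",
    hub_strategy="checkpoint",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```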
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 30
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `learning_rate`: 1e-05
- `weight_decay`: 5e-06
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.5
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: False
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: True
- `resume_from_checkpoint`: None
- `hub_model_id`: bobox/DeBERTaV3-small-GeneralSentenceTransformer-v2-checkpoints-tmp
- `hub_strategy`: checkpoint
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | scitail-pairs-pos loss | nli-pairs loss | qnli-contrastive loss | sts-test_spearman_cosine |
|:-----:|:-----:|:-------------:|:----------------------:|:--------------:|:---------------------:|:------------------------:|
| 0 | 0 | - | 3.4975 | 4.3370 | 4.4702 | 0.2589 |
| 0.1 | 1757 | 3.8346 | 2.3231 | 2.8535 | 3.0973 | - |
| 0.2 | 3514 | 1.8532 | 0.9755 | 1.3508 | 2.0603 | - |
| 0.3 | 5271 | 1.2185 | 0.7407 | 0.9381 | 1.2534 | - |
| 0.4 | 7028 | 0.9584 | 0.6616 | 0.7495 | 0.5140 | - |
| 0.5 | 8785 | 0.8157 | 0.6057 | 0.6550 | 0.3295 | - |
| 0.6 | 10542 | 0.6698 | 0.5821 | 0.5809 | 0.2423 | - |
| 0.7 | 12299 | 0.6497 | 0.5040 | 0.5178 | 0.2409 | - |
| 0.8 | 14056 | 0.5737 | 0.4942 | 0.5019 | 0.1500 | - |
| 0.9 | 15813 | 0.5896 | 0.4757 | 0.4804 | 0.1465 | - |
| 1.0 | 17570 | 0.5174 | 0.5253 | 0.4587 | 0.0534 | - |
| 1.1 | 19327 | 0.5059 | 0.5493 | 0.4587 | 0.0278 | - |
| 1.2 | 21084 | 0.4654 | 0.4850 | 0.4415 | 0.0517 | - |
| 1.3 | 22841 | 0.4224 | 0.4292 | 0.3957 | 0.0938 | - |
| 1.4 | 24598 | 0.4125 | 0.4624 | 0.3794 | 0.0839 | - |
| 1.5 | 26355 | 0.4072 | 0.4481 | 0.3878 | 0.0681 | - |
| 1.6 | 28112 | 0.3572 | 0.4953 | 0.3716 | 0.0674 | - |
| 1.7 | 29869 | 0.371 | 0.4767 | 0.3622 | 0.0600 | - |
| 1.8 | 31626 | 0.3332 | 0.4659 | 0.3600 | 0.0561 | - |
| 1.9 | 33383 | 0.3695 | 0.4604 | 0.3567 | 0.0614 | - |
| 2.0 | 35140 | 0.3315 | 0.4712 | 0.3597 | 0.0540 | 0.7975 |
### Framework Versions
- Python: 3.10.13
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2
- Accelerate: 0.30.1
- Datasets: 2.19.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### GISTEmbedLoss
```bibtex
@misc{solatorio2024gistembed,
title={GISTEmbed: Guided In-sample Selection of Training Negatives for Text Embedding Fine-tuning},
author={Aivin V. Solatorio},
year={2024},
eprint={2402.16829},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
author={Su Jianlin},
year={2022},
month={Jan},
url={https://kexue.fm/archives/8847},
}
```
#### MarginMSELoss
```bibtex
@misc{hofstätter2021improving,
title={Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation},
author={Sebastian Hofstätter and Sophia Althammer and Michael Schröder and Mete Sertkan and Allan Hanbury},
year={2021},
eprint={2010.02666},
archivePrefix={arXiv},
primaryClass={cs.IR}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | [
"MEDAL",
"SCIQ",
"SCITAIL"
] |
aisingapore/llama3.1-70b-cpt-sea-lionv3-base | aisingapore | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"zh",
"vi",
"id",
"th",
"fil",
"ta",
"ms",
"km",
"lo",
"my",
"arxiv:2309.06085",
"arxiv:2311.07911",
"arxiv:2403.06350",
"arxiv:2101.09635",
"base_model:meta-llama/Llama-3.1-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-70B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-12-11T10:22:14 | 2024-12-19T13:06:53 | 33 | 0 | ---
base_model: meta-llama/Llama-3.1-70B-Instruct
language:
- en
- zh
- vi
- id
- th
- fil
- ta
- ms
- km
- lo
- my
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
base_model_relation: finetune
---
<div>
<img src="llama_3.1_70b_sea-lion_v3_base_banner.png"/>
</div>
# Llama3.1 70B CPT SEA-LIONv3
SEA-LION is a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
Llama3.1 70B CPT SEA-LIONv3 Base is a multilingual model which has undergone continued pre-training on approximately **200B** tokens across 11 SEA languages: Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai and Vietnamese.
SEA-LION stands for <i>Southeast Asian Languages In One Network</i>.
- **Developed by:** Products Pillar, AI Singapore
- **Funded by:** Singapore NRF
- **Model type:** Decoder
- **Languages supported:** Burmese, Chinese, English, Filipino, Indonesian, Khmer, Lao, Malay, Tamil, Thai, Vietnamese.
- **License:** [Llama 3.1 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/LICENSE)
## Model Details
### Model Description
We performed continued pre-training in English and SEA languages on [Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct), a decoder model using the Llama 3.1 architecture, to create Llama3.1 70B CPT SEA-LIONv3 Base.
For tokenisation, the model employs the default tokenizer used in Llama 3.1 70B Instruct.
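As a base model without instruction tuning, it loads with the standard transformers API; a minimal sketch (device placement, dtype and the Malay prompt are illustrative choices):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aisingapore/llama3.1-70b-cpt-sea-lionv3-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the training precision listed below
    device_map="auto",           # a 70B model needs several GPUs or offloading
)

inputs = tokenizer("Singapura ialah", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```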
### Benchmark Performance
We evaluated the Llama3.1 70B CPT SEA-LIONv3 base model on general language capabilities and constraint-following behaviour.
#### General Language Capabilities and Constraint-following Behaviour
For the evaluation of general language capabilities, we employed the [SEA-HELM (also known as BHASA) evaluation benchmark](https://arxiv.org/abs/2309.06085v2) across a variety of tasks.
These tasks include Question Answering (QA), Sentiment Analysis (Sentiment), Toxicity Detection (Toxicity), Translation in both directions (Eng>Lang & Lang>Eng), Abstractive Summarisation (Abssum), Causal Reasoning (Causal) and Natural Language Inference (NLI).
Note: SEA-HELM is implemented using prompts to elicit answers in a strict format. For all tasks, the model is expected to provide an answer tag from which the answer is automatically extracted. For tasks where options are provided, the answer should comprise one of the pre-defined options. The scores for each task are normalised to account for baseline performance due to random chance.
The evaluation was done **five-shot** with native prompts on a sample of 100-1000 instances for each dataset.
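As a hedged illustration of the chance normalisation mentioned in the note above (the exact formula is defined by SEA-HELM and is not reproduced in this card):

```python
def normalise_for_chance(raw_accuracy: float, chance_accuracy: float) -> float:
    """Rescale so that random guessing maps to 0 and a perfect score to 1.

    A common normalisation scheme and an assumption here; consult the
    SEA-HELM paper for the exact definition used on the leaderboard.
    """
    return max(0.0, (raw_accuracy - chance_accuracy) / (1.0 - chance_accuracy))

# e.g. a 4-option multiple-choice task has chance accuracy 0.25
print(normalise_for_chance(0.62, 0.25))  # ~0.493
```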
Following the implementation of IFEval on the OpenLLM Leaderboard, we also implemented SEA-IFEval to provide a comparison of the ability of the model to follow specific constraints in English and in SEA languages.
**SEA-IFEval**
Based on [IFEval](https://arxiv.org/abs/2311.07911), the linguists and native speakers in the team worked together to filter, localise and translate the datasets into the respective target languages to ensure that the examples remained reasonable, meaningful and natural.
SEA-IFEval evaluates a model's ability to adhere to constraints provided in the prompt, for example beginning a response with a specific word/phrase or answering with a certain number of sections. Additionally, accuracy is normalised by the proportion of responses in the correct language (if the model performs the task correctly but responds in the wrong language, it is judged to have failed the task).
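One plausible reading of that scoring rule, sketched as code (an assumption — consult the SEA-IFEval implementation for the exact definition):

```python
def sea_ifeval_accuracy(followed_constraint: list[bool],
                        in_correct_language: list[bool]) -> float:
    # A response only passes when the constraint is satisfied AND the answer
    # is in the expected language, per the description above.
    passes = [c and l for c, l in zip(followed_constraint, in_correct_language)]
    return sum(passes) / len(passes)
```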
For more details on Llama3.1 70B CPT SEA-LIONv3 base benchmark performance, please refer to the SEA-HELM leaderboard, https://leaderboard.sea-lion.ai/.
## Technical Specifications
### Infrastructure
Llama3.1 70B CPT SEA-LIONv3 was trained in two stages using [MosaicML Composer](https://github.com/mosaicml/composer) on the following hardware:
| Stage | Training Details | Llama3.1 70B CPT SEA-LIONv3 |
|------------|-----------------------|:---------------------------:|
|First Stage | AWS p5e.48xlarge | 8 instances |
| | Nvidia H200 140GB GPU | 64 |
| | Training Duration | 200 hrs (step 0 - 9000) |
|Second Stage| SingTel HGX-100 | 16 instances |
| | Nvidia H100 80GB GPU | 128 |
| | Training Duration | 495 hrs (step 9000 - 47684) |
### Configuration
| HyperParameter | Llama3.1 70B CPT SEA-LIONv3 |
|-------------------|:------------------------:|
| Precision | bfloat16 |
| Optimizer | decoupled_adamw |
| Scheduler | weight_stable_decay |
| Learning Rate | 1.0e-5 |
| Global Batch Size | 512 |
## Data
The Llama3.1 70B CPT SEA-LIONv3 base model underwent continued pre-training on 200B tokens of the following data:
| Language | Source | Total Tokens (B) | Percentage (%) | Total percentage (%) |
| ------------------------ | -------------------------------------- | ---------------- | -------------- | -------------------- |
| Code | StackV2 | 40 | 20 | 20 |
| English | Dolma | 37.5 | 18.75 | 25 |
| | Fineweb-Edu | 7.5 | 3.75 | |
| | Others | 5 | 2.5 | |
| Chinese | SEA-LION Pile v1 | 12 | 6 | 13 |
| | Others | 14 | 7 | |
| Vietnamese | SEA-LION Pile v1 | 8.4 | 4.2 | 13 |
| | VinBigData | 16 | 8 | |
| | Others | 1.6 | 0.8 | |
| Indonesian | SEA-LION Pile v1 | 7 | 3.5 | 13 |
| | SEA-LION Pile v2 | 7 | 3.5 | |
| | Others | 12 | 6 | |
| Thai | SEA-LION Pile v1 | 10.7 | 5.35 | 10 |
| | WangChanBERTa | 8.5 | 4.25 | |
| | Others | 0.8 | 0.4 | |
| Filipino - Malay - Tamil | SEA-LION Pile v1, AI4Bharat Sangraha | 4.28 | 2.14 | 3 |
| | Others | 1.72 | 0.86 | |
| Khmer - Lao - Burmese | SEA-LION Pile v1 | 5.2 | 2.6 | 3 |
| | Others | 0.8 | 0.4 | |
Note:
- All token counts are counted using the Llama 3.1 70B Instruct tokenizer (a minimal sketch of reproducing such counts follows this list)
- SEA-LION Pile v1 is processed from Common Crawl WET, which is published [here](https://huggingface.co/datasets/aisingapore/sea-lion-pile). The cutoff date of this version is September 2020.
- SEA-LION Pile v2 is processed from Common Crawl WARC from October 2020 to April 2024.
- Tamil data from Sangraha is published [here](https://huggingface.co/datasets/ai4bharat/sangraha). The paper can be found [here](https://arxiv.org/abs/2403.06350).
- Tamil news is sourced with permission from [Seithi](https://seithi.mediacorp.sg/)
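As referenced in the first note above, a minimal sketch of reproducing token counts with that tokenizer (assumes access to the gated Llama 3.1 repository on Hugging Face; the sample sentence is illustrative):

```python
from transformers import AutoTokenizer

# assumes access to the gated meta-llama repository
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-70B-Instruct")

def count_tokens(text: str) -> int:
    return len(tokenizer(text, add_special_tokens=False)["input_ids"])

print(count_tokens("Selamat pagi, Singapura!"))
```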
## Call for Contributions
We encourage researchers, developers, and language enthusiasts to actively contribute to the enhancement and expansion of SEA-LION. Contributions can involve identifying and reporting bugs, sharing pre-training, instruction, and preference data, improving documentation usability, proposing and implementing new model evaluation tasks and metrics, or training versions of the model in additional Southeast Asian languages. Join us in shaping the future of SEA-LION by sharing your expertise and insights to make these models more accessible, accurate, and versatile. Please check out our GitHub for further information on the call for contributions.
## The Team
Chan Adwin, Cheng Nicholas, Choa Esther, Huang Yuli, Hulagadri Adithya Venkatadri, Lau Wayne, Lee Chwan Ren, Leong Wai Yi, Leong Wei Qi, Limkonchotiwat Peerat, Liu Bing Jie Darius, Montalan Jann Railey, Ng Boon Cheong Raymond, Ngui Jian Gang, Nguyen Thanh Ngan, Ong Brandon, Ong Tat-Wee David, Ong Zhi Hao, Rengarajan Hamsawardhini, Siow Bryan, Susanto Yosephine, Tai Ngee Chia, Tan Choon Meng, Teng Walter, Teo Eng Sipp Leslie, Teo Wei Yi, Tjhi William, Yeo Yeow Tong, Yong Xianbin
## Acknowledgements
[AI Singapore](https://aisingapore.org/) is a national programme supported by the National Research Foundation, Singapore and hosted by the National University of Singapore. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the National University of Singapore.
## Contact
For more info, please contact us using this [SEA-LION Inquiry Form.](https://forms.gle/sLCUVb95wmGf43hi6)
[Link to SEA-LION's GitHub repository.](https://github.com/aisingapore/sealion)
## Disclaimer
This is the repository for the commercial instruction-tuned model.
The model has _not_ been aligned for safety.
Developers and users should perform their own safety fine-tuning and related security measures.
In no event shall the authors be held liable for any claims, damages, or other liabilities arising from the use of the released weights and codes.
## References
### Thai Pre-Training Data Reference
```bibtex
@misc{lowphansirikul2021wangchanberta,
title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
year={2021},
eprint={2101.09635},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | [
"CHIA"
] |
pruas/BENT-PubMedBERT-NER-Organism | pruas | token-classification | [
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-14T12:17:32 | 2024-03-02T10:09:47 | 32 | 3 | ---
language:
- en
license: apache-2.0
pipeline_tag: token-classification
---
Named Entity Recognition (NER) model to recognize organism entities.
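A hedged usage sketch with the standard transformers token-classification pipeline (the aggregation strategy and example sentence are illustrative):

```python
from transformers import pipeline

# "simple" aggregation merges word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="pruas/BENT-PubMedBERT-NER-Organism",
    aggregation_strategy="simple",
)
print(ner("Escherichia coli was cultured from the patient samples."))
```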
Please cite our work:
```
@article{NILNKER2022,
title = {NILINKER: Attention-based approach to NIL Entity Linking},
journal = {Journal of Biomedical Informatics},
volume = {132},
pages = {104137},
year = {2022},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2022.104137},
url = {https://www.sciencedirect.com/science/article/pii/S1532046422001526},
author = {Pedro Ruas and Francisco M. Couto},
}
```
[PubMedBERT](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext) fine-tuned on the following datasets:
- [CellFinder](http://cellfinder.org/about/annotation/): entity type "species"
- [CRAFT](https://github.com/UCDenver-ccp/CRAFT/tree/master/concept-annotation): entity type "NCBITaxon"
- [MLEE](http://nactem.ac.uk/MLEE/): entity type "organism"
- [LINNAEUS](http://linnaeus.sourceforge.net/) (train and dev sets):
- [Species-800](https://species.jensenlab.org/)
- [BioNLP11ID](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP11ID-species-IOB): entity type "Organism"
- [BioNLP13CG](https://github.com/cambridgeltl/MTL-Bioinformatics-2016/tree/master/data/BioNLP13CG-species-IOB): entity types "Organism", "Organism subdivision"
- [miRNA-Test-Corpus](https://www.scai.fraunhofer.de/en/business-research-areas/bioinformatics/downloads/download-mirna-test-corpus.html): entity type "species"
- [Mantra](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4986661/pdf/ocv037.pdf): entity type "DISO" | [
"NAMED_ENTITY_RECOGNITION"
] | [
"CRAFT",
"CELLFINDER",
"LINNAEUS",
"MLEE",
"MIRNA"
] |
medspaner/xlm-roberta-large-spanish-trials-cases-7sgs-umls | medspaner | token-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-09-29T11:19:37 | 2024-10-01T06:33:30 | 32 | 0 | ---
license: cc-by-nc-4.0
metrics:
- precision
- recall
- f1
- accuracy
tags:
- generated_from_trainer
widget:
- text: "Criterios de inclusión: 18 a 65 años; necrosis avascular de cadera; sintomática\
\ de menos de 6 meses; capaz de otorgar consentimiento informado.\n Criterios\
\ de exclusión: embarazo, lactancia, mujer fértil sin métodos anticonceptivos\
\ adecuados; tratamiento activo con bifosfonatos; infección por VIH, hepatitis\
\ B o hepatitis C; historia de neoplasia en cualquier organo."
- text: 'Recuperación de daño hepático relacionado con nutrición parenteral con ácidos
omega-3 en adultos críticos: ensayo clínico aleatorizado.'
- text: 'Título público: Análisis del dolor tras inyección intramuscular de penicilina
con agujas de mayor calibre y anestésico local, frente a aguja tradicional sin
anestésico en pacientes con sífilis'
model-index:
- name: roberta-large-spanish-trials-cases-7sgs-umls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-spanish-trials-cases-7sgs-umls
This medical named entity recognition model detects 7 types of semantic groups from the [Unified Medical Language System (UMLS)](https://www.nlm.nih.gov/research/umls/index.html) ([Bodenreider 2004](https://academic.oup.com/nar/article/32/suppl_1/D267/2505235)):
- ANAT: body parts and anatomy (e.g. *garganta*, 'throat')
- CHEM: chemical entities and pharmacological substances (e.g. *aspirina*,'aspirin')
- DEVI: medical devices (e.g. *catéter*, 'catheter')
- DISO: pathologic conditions (e.g. *dolor*, 'pain')
- LIVB: living beings (e.g. *paciente*, 'patient')
- PHYS: physiological processes (e.g. *respiración*, 'breathing')
- PROC: diagnostic and therapeutic procedures, laboratory analyses and medical research activities (e.g. *cirugía*, 'surgery')
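A hedged usage sketch with the standard transformers token-classification pipeline (the Spanish example sentence, 'The patient reports chest pain after surgery', is illustrative):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="medspaner/xlm-roberta-large-spanish-trials-cases-7sgs-umls",
    aggregation_strategy="simple",  # merges word pieces into entity spans
)
# the model should tag spans like "paciente" (LIVB), "dolor torácico" (DISO), "cirugía" (PROC)
print(ner("El paciente refiere dolor torácico tras la cirugía."))
```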
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.905 (±0.007)
- Recall: 0.916 (±0.004)
- F1: 0.910 (±0.005)
- Accuracy: 0.955 (±0.002)
## Model description
This model adapts the pre-trained model [xlm-roberta-large-spanish-clinical](https://huggingface.co/llange/xlm-roberta-large-spanish-clinical), presented in [Lange et al. (2022)](https://academic.oup.com/bioinformatics/article/38/12/3267/6575884).
It is fine-tuned to conduct medical named entity recognition on texts about clinical trials and clinical cases in Spanish.
The model is fine-tuned on the [CT-EBM-ES corpus (Campillos-Llanos et al. 2021)](https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-021-01395-z) and 100 clinical cases with Creative Commons License.
If you use this model, please, cite as follows:
```
@article{campillosetal2024,
title = {{Hybrid tool for semantic annotation and concept extraction of medical texts in Spanish}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n},
journal = {BMC Bioinformatics},
year={2024},
publisher={BioMed Central}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
It is a collection of 1200 texts about clinical trials studies and clinical trials announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
If you use the CT-EBM-ES resource, please, cite as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
To fine-tune the model, we also used 100 clinical cases with Creative Commons licences.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: Adam
- num_epochs: average 21.75 epochs (±6.34); trained with early stopping if no improvement after 5 epochs (early stopping patience: 5)
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.905 (±0.007) | 0.916 (±0.004) | 0.910 (±0.005) | 0.955 (±0.002) |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4
- Tokenizers 0.11.6
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"SCIELO"
] |
Tweeties/tweety-7b-tatar-v24a | Tweeties | text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"tweety",
"tt",
"dataset:oscar-corpus/OSCAR-2301",
"arxiv:2408.04303",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-04-11T19:40:19 | 2024-08-09T08:58:41 | 32 | 11 | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- oscar-corpus/OSCAR-2301
language:
- tt
license: apache-2.0
tags:
- tweety
---
<img align="right" src="https://huggingface.co/Tweeties/tweety-tatar-base-7b-2024-v1/resolve/main/TweetyTatar.png?download=true" alt="Tweety-Tatar-7B: A Tatar Large Language Model" width="20%">
# Tweety Tatar / Base 7b / 2024-v1
## Model description
This model is our trans-tokenized LLM for the [Tatar language](https://en.wikipedia.org/wiki/Tatar_language), converted from the [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model trained by MistralAI.
Trans-tokenized LLMs are language models finetuned to produce output in a particular language, using a novel tokenizer native to that language.
- **Developed by:** [François Remy](https://huggingface.co/FremyCompany) (UGent), [Alfiya Khabibullina](https://huggingface.co/justalphie) (BeCode), [et al.](#citation)
- **Funded by:** IDLab / GPULab (UGent)
- **Model type:** Foundation model using the mistral architecture
- **Language(s) (NLP):** Tatar
- **License:** Apache 2.0
## In-scope usage
This model can be used as-is to perform basic language modeling operations in Tatar, or finetuned to perform more complex operations.
This model has not undergone Instruction- or Chat-based finetuning, which means that the model functions best in few-shot settings.
## Usage instructions
This model can be used just like any LLM in the HuggingFace framework:
```py
import transformers
MODEL_NAME = "Tweeties/tweety-tatar-base-7b-2024-v1"
generate = transformers.pipeline("text-generation", model=MODEL_NAME)
```
### Word Analogies
```py
ANALOGY_PROMPT = """Бу аналоглар таблицасын тутырыгыз:
* {x1} : {y1}
* {x2} :"""
def score_analogy(x1, y1, x2, y2):
    Y2_PROMPT = ANALOGY_PROMPT.replace('{x1}', x1).replace('{y1}', y1).replace('{x2}', x2)
    answer = generate(
        Y2_PROMPT,
        use_cache=True,
        do_sample=False,
        max_new_tokens=10,
        return_full_text=False,
        pad_token_id=generate.tokenizer.eos_token_id,
        eos_token_id=generate.tokenizer.convert_tokens_to_ids(['<0x0A>', '</s>']),
    )[0]['generated_text'].strip()
    return 1 if answer == y2 else 0
score_analogy('Мәскәү', 'Русия', 'Әнкара', 'Төркия') # 1
```
### Summarization
```py
import torch

# Reuse the tokenizer and model from the pipeline loaded in the previous snippet.
tokenizer = generate.tokenizer
model = generate.model
cuda_device = model.device

SUMMARIZE = "Түбәндәге текстка йомгак ясагыз:\n"
LONG_TEXT = "\n\nОзын текст:\n"
LONG_TEXT_DEMO = "Кеше организмы катлаулы организм, аның өчен кирәкле туклыклы матдәләрнең аерым баланс таләп итә. Кеше организмының туклану рационы нигездә пешекләнгән ризыклардан тора икән, аның организмы бу ысул белән туклануга җайлаша. Әмма, шул ук кеше кинәт чимал диетасына күчә икән, аның организмы әлеге үзгәрешне кабул итә алмый, бу мөмкин кадәр зыян китерергә мөмкин." # The human body is a complex organism that requires a specific balance of nutrients to function optimally. When a person's diet consists primarily of cooked food, their body adapts to this way of eating. However, if that same person suddenly switches to a raw food diet, their body may not be able to handle the sudden change, leading to potential harm.
SHORT_TEXT = "\n\nКыска текст:\n"
SHORT_TEXT_DEMO = "Әмма пешкән ризык ашауга гына күнгән организмга кинәт чи ризык белән туклануга күчүнең зарарлы нәтиҗәсе дә булырга мөмкин." # However, a body accustomed to eating only cooked food can have harmful consequences when suddenly switching to eating raw food.
def generate_tatar_summary(tatar_text_to_summarize: str) -> str:

    # craft the 1-shot example
    input_ids = torch.concat([
        tokenizer.encode(SUMMARIZE, return_tensors='pt'),
        tokenizer.encode(LONG_TEXT, add_special_tokens=False, return_tensors='pt'),
        tokenizer.encode(LONG_TEXT_DEMO, add_special_tokens=False, return_tensors='pt'),
        tokenizer.encode(SHORT_TEXT, add_special_tokens=False, return_tensors='pt'),
        tokenizer.encode(SHORT_TEXT_DEMO, add_special_tokens=False, return_tensors='pt'),
        tokenizer.encode("\n\n", add_special_tokens=False, return_tensors='pt')
    ], axis=1)

    # craft the input
    input_ids = torch.concat([
        input_ids,
        tokenizer.encode(SUMMARIZE, return_tensors='pt'),
        tokenizer.encode(LONG_TEXT, add_special_tokens=False, return_tensors='pt'),
        tokenizer.encode(tatar_text_to_summarize, add_special_tokens=False, return_tensors='pt'),
        tokenizer.encode(SHORT_TEXT, add_special_tokens=False, return_tensors='pt'),
    ], axis=1)

    # generate the output
    model_inputs = {'input_ids': input_ids.to(cuda_device)}
    model_outputs = model.generate(
        **model_inputs,
        max_new_tokens=80,
        num_beams=8,
        no_repeat_ngram_size=6,
        early_stopping=False,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.convert_tokens_to_ids(['<0x0A>', '</s>']),
    )

    # decode the output
    return tokenizer.decode(model_outputs[0][input_ids.shape[1]:]).rstrip()
generate_tatar_summary("Зур шартлау (ингл. Big Bang) – Галәмнең башлангыч, сингуляр халәттә торган чорын тасвирлаучы космологик модель. Әле ХХ гасырда да без яшәгән Галәм статик структуралы, дигән фикер яшәгән. Ягъни, Галәмнең башы һәм ахыры юк, имеш, ул һәрвакыт булган һәм булачак. Бу фикер фән дөньясында бик озак, астрономия фәненең бөтен нигезләрен җимереп яңа теория барлыкка килгәнче яшәгән. Бу теориянең исеме – «Зур шартлау» теориясе.")
```
## Citation
If you use this model, please cite our work as:
```
@article{tweeties2024,
title = {Trans-Tokenization and Cross-lingual Vocabulary Transfers: Language Adaptation of LLMs for Low-Resource NLP},
author = {François Remy and Pieter Delobelle and Hayastan Avetisyan and Alfiya Khabibullina and Miryam de Lhoneux and Thomas Demeester},
url = {https://arxiv.org/abs/2408.04303},
year = {2024},
note = {Accepted at COLM 2024}
}
``` | [
"SUMMARIZATION"
] | [
"CRAFT"
] |
Huzaifa367/chat-summarizer | Huzaifa367 | summarization | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"dataset:samsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-12T15:03:57 | 2024-05-12T15:16:52 | 32 | 1 | ---
datasets:
- samsum
pipeline_tag: summarization
widget:
- text: "Laurie: So, what are your plans for this weekend?\nChristie: I don’t know.\
\ Do you want to get together or something?\nSarah: How about going to see a movie?\
\ Cinemax 26 on Carson Boulevard is showing Enchanted. Laurie: That sounds like\
\ a good idea. Maybe we should go out to eat beforehand.\nSarah: It is fine with\
\ me. Where do you want to meet?\nChristie: Let’s meet at Summer Pizza House.\
\ I have not gone there for a long time.\nLaurie: Good idea again. I heard they\
\ just came up with a new pizza. It should be good because Summer Pizza House\
\ always has the best pizza in town.\nSarah: When should we meet?\nChristie: Well,\
\ the movie is shown at 2:00PM, 4:00PM, 6:00PM and 8:00PM.\nLaurie: Why don’t\
\ we go to the 2:00PM show? We can meet at Summer Pizza House at noon. That will\
\ give us plenty of time to enjoy our pizza.\nSarah: My cousin Karen is in town.\
\ Can I bring her along? I hate to leave her home alone.\nChristie: Karen is in\
\ town? Yes, bring her along. Laurie, you remember Karen? We met her at Sara’s\
\ high school graduation party two years ago.\nLaurie: I do not quite remember\
\ her. What does she look like?\nSarah: She has blond hair, she is kind of slender,\
\ and she is about your height.\nLaurie: She wears eyeglasses, right?\nSarah:\
\ Yes, and she was playing the piano off and on during the party.\nLaurie: I remember\
\ her now. Yes, do bring her along Sara. She is such a nice person, and funny\
\ too.\nSarah: She will be happy to meet both of you again.\nChristie: What is\
\ she doing these days?\nSarah: She graduated last June, and she will start her\
\ teaching career next week when the new school term begins.\nLaurie: What grade\
\ is she going to teach?\nSarah: She will teach kindergarten. She loves working\
\ with kids, and she always has such a good rapport with them\nChristie: Kindergarten?\
\ She must be a very patient person. I always think kindergarten is the most difficult\
\ class to teach. Most of the kids have never been to school, and they have\
\ never been away from mommy for long.\nSarah: I think Karen will do fine. She\
\ knows how to handle young children\nLaurie: I think the first few weeks will\
\ be tough. However, once the routine is set, it should not be too difficult to\
\ teach kindergarten.\nChristie: You are right. The kids might even look forward\
\ to going to school since they have so many friends to play with.\nSarah: There\
\ are so many new things for them to do at school too. They do a lot of crafts\
\ in kindergarten. I am always amazed by the things kindergarten teachers do.\
\ \nLaurie: Yes, I have seen my niece come home with so many neat stuff.\nChristie:\
\ Maybe we can ask Karen to show us some of the things that we can do for this\
\ Halloween.\nLaurie: Maybe we can stop by the craft store after the movie. What\
\ do you think, Sara?\nSarah: I will talk to her. I think she will like that.\
\ It will help her with school projects when Halloween comes.\nChristie: Michael’s\
\ is a good store for crafts. It always carries a variety of things, and you can\
\ find almost anything there.\nLaurie: There is a Michaels store not far away\
\ from Cinemax 26. I believe it is just around the corner, on Pioneer Avenue.\
\ We can even walk over there.\nSarah: So, we plan to meet for pizza at noon,\
\ go to the movies at two, and shop at Michael’s afterward. Right?\nLaurie and\
\ Christie: Yes. \n"
model-index:
- name: bart-large-cnn-samsum
results:
- task:
type: summarization
name: Conversation Summarization
dataset:
name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization'
type: samsum
metrics:
    - type: rouge-1
      value: 54.8764
      name: Validation ROUGE-1
    - type: rouge-2
      value: 29.6869
      name: Validation ROUGE-2
    - type: rouge-l
      value: 44.9874
      name: Validation ROUGE-L
- type: loss
value: 1.47812
name: loss
---
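This card ships without a usage section; below is a minimal, hedged sketch (not from the model authors) of running the checkpoint with the `summarization` pipeline, using the model id of this repository. The dialogue is abbreviated from the widget example above.
```python
# Minimal sketch: dialogue summarization with the transformers pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="Huzaifa367/chat-summarizer")

dialogue = (
    "Laurie: So, what are your plans for this weekend?\n"
    "Christie: I don't know. Do you want to get together or something?\n"
    "Sarah: How about going to see a movie?"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```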
| [
"SUMMARIZATION"
] | [
"CRAFT"
] |
grimjim/llama-3-aaditya-OpenBioLLM-8B | grimjim | text-generation | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"llama-3",
"Mixtral",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"distillation",
"en",
"arxiv:2305.18290",
"arxiv:2303.13375",
"arxiv:2212.13138",
"arxiv:2305.09617",
"arxiv:2402.07023",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-05-29T03:26:48 | 2024-05-29T11:37:35 | 32 | 0 | ---
base_model: meta-llama/Meta-Llama-3-8B
language:
- en
license: llama3
tags:
- llama-3
- llama
- Mixtral
- instruct
- finetune
- chatml
- DPO
- RLHF
- gpt4
- distillation
widget:
- example_title: OpenBioLLM-8B
messages:
- role: system
content: You are an expert and experienced from the healthcare and biomedical
domain with extensive medical knowledge and practical experience.
- role: user
content: How long does it take for newborn jaundice to go away?
output:
text: 'Newborn jaundice, also known as neonatal jaundice, is a common condition
in newborns where the yellowing of the skin and eyes occurs due to an elevated
level of bilirubin in the blood. Bilirubin is a yellow pigment that forms when
red blood cells break down. In most cases, newborn jaundice resolves on its
own without any specific treatment.
The duration of newborn jaundice can vary depending on several factors such
as the underlying cause, gestational age at birth, and individual variations
in bilirubin metabolism. Here are some general guidelines:
1. Physiological jaundice: This is the most common type of newborn jaundice
and usually appears within 24-72 hours after birth. It tends to peak between
the second and fifth day of life and gradually improves over the next week or
two. By the time the baby is one week old, the jaundice should have mostly resolved.
2. Breast milk jaundice: This type of jaundice occurs in breastfed babies and
may appear later than physiological jaundice, typically between the fifth and
fourteenth day of life. It tends to persist for a longer duration but usually
resolves within six weeks after birth. 3. Pathological jaundice: This type of
jaundice is less common and occurs due to an underlying medical condition that
affects bilirubin metabolism or liver function. The duration of pathological
jaundice depends on the specific cause and may require treatment.
It''s important for parents to monitor their newborn''s jaundice closely and
seek medical advice if the jaundice progresses rapidly, becomes severe, or is
accompanied by other symptoms such as poor feeding, lethargy, or excessive sleepiness.
In these cases, further evaluation and management may be necessary. Remember
that each baby is unique, and the timing of jaundice resolution can vary. If
you have concerns about your newborn''s jaundice, it''s always best to consult
with a healthcare professional for personalized advice and guidance.'
model-index:
- name: OpenBioLLM-8B
results: []
---
<div align="center">
<img width="260px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/BrQCb95lmEIFz79QAmoNA.png"></div>

<div align="center">
<h1>Advancing Open-source Large Language Models in Medical Domain</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://colab.research.google.com/drive/1F5oV20InEYeAJGmBwYF9NM_QhLmjBkKJ?usp=sharing">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/openlifescience-ai">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="#">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/A5Fjf5zC69">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>

Introducing OpenBioLLM-8B: A State-of-the-Art Open Source Biomedical Large Language Model
OpenBioLLM-8B is an advanced open source language model designed specifically for the biomedical domain. Developed by Saama AI Labs, this model leverages cutting-edge techniques to achieve state-of-the-art performance on a wide range of biomedical tasks.
🏥 **Biomedical Specialization**: OpenBioLLM-8B is tailored for the unique language and knowledge requirements of the medical and life sciences fields. It was fine-tuned on a vast corpus of high-quality biomedical data, enabling it to understand and generate text with domain-specific accuracy and fluency.
🎓 **Superior Performance**: With 8 billion parameters, OpenBioLLM-8B outperforms other open source biomedical language models of similar scale. It has also demonstrated better results compared to larger proprietary & open-source models like GPT-3.5 and Meditron-70B on biomedical benchmarks.
🧠 **Advanced Training Techniques**: OpenBioLLM-8B builds upon the powerful foundation of the [Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) model. It incorporates the DPO dataset and fine-tuning recipe along with a custom diverse medical instruction dataset. Key components of the training pipeline include:
<div align="center">
<img width="1200px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/oPchsJsEpQoGcGXVbh7YS.png">
</div>
- **Policy Optimization**: [Direct Preference Optimization: Your Language Model is Secretly a Reward Model (DPO)](https://arxiv.org/abs/2305.18290)
- **Ranking Dataset**: [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar)
- **Fine-tuning dataset**: Custom Medical Instruct dataset (We plan to release a sample training dataset in our upcoming paper; please stay updated)
This combination of cutting-edge techniques enables OpenBioLLM-8B to align with key capabilities and preferences for biomedical applications.
⚙️ **Release Details**:
- **Model Size**: 8 billion parameters
- **Quantization**: Optimized quantized versions available [Here](https://huggingface.co/aaditya/OpenBioLLM-Llama3-8B-GGUF)
- **Language(s) (NLP):** en
- **Developed By**: [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) from Saama AI Labs
- **License:** Meta-Llama License
- **Fine-tuned from models:** [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
- **Resources for more information:**
- Paper: Coming soon
The model can be fine-tuned for more specialized tasks and datasets as needed.
OpenBioLLM-8B represents an important step forward in democratizing advanced language AI for the biomedical community. By leveraging state-of-the-art architectures and training techniques from leading open source efforts like Llama-3, we have created a powerful tool to accelerate innovation and discovery in healthcare and the life sciences.
We are excited to share OpenBioLLM-8B with researchers and developers around the world.
### Use with transformers
**Important: Please use the exact chat template provided by the Llama-3 instruct version; otherwise performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 (i.e. greedy decoding) to reduce this.**
See the snippet below for usage with Transformers:
```python
import transformers
import torch
model_id = "aaditya/OpenBioLLM-Llama3-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert and experienced from the healthcare and biomedical domain with extensive medical knowledge and practical experience. Your name is OpenBioLLM, and you were developed by Saama AI Labs. who's willing to help answer the user's query with explanation. In your explanation, leverage your deep medical expertise such as relevant anatomical structures, physiological processes, diagnostic criteria, treatment guidelines, or other pertinent medical concepts. Use precise medical terminology while still aiming to make the explanation clear and accessible to a general audience."},
{"role": "user", "content": "How can i split a 3mg or 4mg waefin pill so i can get a 2.5mg pill?"},
]
prompt = pipeline.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
terminators = [
pipeline.tokenizer.eos_token_id,
pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]
outputs = pipeline(
prompt,
max_new_tokens=256,
eos_token_id=terminators,
    do_sample=False,  # deterministic decoding, per the temperature = 0 advice above
    # do_sample=True, temperature=0.7, top_p=0.9,  # alternative: sampled decoding
)
print(outputs[0]["generated_text"][len(prompt):])
```
## **Training procedure**
### **Training hyperparameters**
<details>
<summary>Click to see details</summary>
- learning_rate: 0.0002
- lr_scheduler: cosine
- train_batch_size: 12
- eval_batch_size: 8
- GPU: H100 80GB SXM5
- num_devices: 1
- optimizer: adamw_bnb_8bit
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
</details>
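Expressed as `transformers.TrainingArguments`, the settings above would look roughly like the sketch below. The authors trained with Axolotl, so this is only an equivalent illustration; `output_dir` and `bf16` are assumptions, not taken from the card.
```python
# Hedged sketch of the listed training settings; not the original Axolotl config.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="openbiollm-8b",     # illustrative
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=8,
    num_train_epochs=4,
    optim="adamw_bnb_8bit",
    bf16=True,                      # assumption: typical for H100 training
)
```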
### **Peft hyperparameters**
<details>
<summary>Click to see details</summary>
- adapter: qlora
- lora_r: 128
- lora_alpha: 256
- lora_dropout: 0.05
- lora_target_linear: true
- lora_target_modules:
- q_proj
- v_proj
- k_proj
- o_proj
- gate_proj
- down_proj
- up_proj
</details>
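The same adapter settings, written as a `peft.LoraConfig` for reference. Again, this is a hedged equivalent of the values listed above, not the authors' original configuration file.
```python
# Hedged sketch of the listed QLoRA adapter settings as a peft.LoraConfig.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj"],
    task_type="CAUSAL_LM",
)
```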
### **Training results**
### **Framework versions**
- Transformers 4.39.3
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.1
- Axolotl
- Lm harness for evaluation
# Benchmark Results
🔥 OpenBioLLM-8B demonstrates superior performance compared to larger models, such as GPT-3.5 and Meditron-70B, across 9 diverse biomedical datasets, achieving state-of-the-art results with an average score of 72.50%, despite having a significantly smaller parameter count. The model's strong performance in domain-specific tasks, such as Clinical KG, Medical Genetics, and PubMedQA, highlights its ability to effectively capture and apply biomedical knowledge.
🚨 The GPT-4, Med-PaLM-1, and Med-PaLM-2 results are taken from their official papers. Since Med-PaLM doesn't provide zero-shot accuracy, we are using 5-shot accuracy from their paper for comparison. All results presented are in the zero-shot setting, except for Med-PaLM-2 and Med-PaLM-1, which use 5-shot accuracy.
| | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA 4 opts | PubMedQA | MedMCQA | Avg |
|--------------------|-------------|------------------|---------|--------------|-----------------|------------------|--------------|----------|---------|-------|
| **OpenBioLLM-70B** | **92.93** | **93.197** | **83.904** | 93.75 | 93.827 | **85.749** | 78.162 | 78.97 | **74.014** | **86.05588** |
| Med-PaLM-2 (5-shot) | 88.3 | 90 | 77.8 | **95.2** | 94.4 | 80.9 | **79.7** | **79.2** | 71.3 | 84.08 |
| **GPT-4** | 86.04 | 91 | 80 | 93.01 | **95.14** | 76.88 | 78.87 | 75.2 | 69.52 | 82.85 |
| Med-PaLM-1 (Flan-PaLM, 5-shot) | 80.4 | 75 | 63.7 | 83.8 | 88.9 | 76.3 | 67.6 | 79 | 57.6 | 74.7 |
| **OpenBioLLM-8B** | 76.101 | 86.1 | 69.829 | 78.21 | 84.213 | 68.042 | 58.993 | 74.12 | 56.913 | 72.502 |
| Gemini-1.0 | 76.7 | 75.8 | 66.7 | 77.7 | 88 | 69.2 | 58 | 70.7 | 54.3 | 70.79 |
| GPT-3.5 Turbo 1106 | 74.71 | 74 | 72.79 | 72.79 | 72.91 | 64.73 | 57.71 | 72.66 | 53.79 | 66 |
| Meditron-70B | 66.79 | 69 | 53.33 | 71.69 | 76.38 | 63 | 57.1 | 76.6 | 46.85 | 64.52 |
| gemma-7b | 69.81 | 70 | 59.26 | 66.18 | 79.86 | 60.12 | 47.21 | 76.2 | 48.96 | 64.18 |
| Mistral-7B-v0.1 | 68.68 | 71 | 55.56 | 68.38 | 68.06 | 59.54 | 50.82 | 75.4 | 48.2 | 62.85 |
| Apollo-7B | 62.26 | 72 | 61.48 | 69.12 | 70.83 | 55.49 | 55.22 | 39.8 | 53.77 | 60 |
| MedAlpaca-7b | 57.36 | 69 | 57.04 | 67.28 | 65.28 | 54.34 | 41.71 | 72.8 | 37.51 | 58.03 |
| BioMistral-7B | 59.9 | 64 | 56.5 | 60.4 | 59 | 54.7 | 50.6 | 77.5 | 48.1 | 57.3 |
| AlpaCare-llama2-7b | 49.81 | 49 | 45.92 | 33.82 | 50 | 43.35 | 29.77 | 72.2 | 34.42 | 45.36 |
| ClinicalGPT | 30.56 | 27 | 30.37 | 19.48 | 25 | 24.27 | 26.08 | 63.8 | 28.18 | 30.52 |
<div align="center">
<img width="1600px" src="https://cdn-uploads.huggingface.co/production/uploads/5f3fe13d79c1ba4c353d0c19/_SzdcJSBjZyo8RS1bTEkP.png">
</div>
## Detailed Medical Subjectwise accuracy

# Use Cases & Examples
🚨 **Below results are from the quantized version of OpenBioLLM-70B**
# Summarize Clinical Notes
OpenBioLLM-70B can efficiently analyze and summarize complex clinical notes, EHR data, and discharge summaries, extracting key information and generating concise, structured summaries.

# Answer Medical Questions
OpenBioLLM-70B can provide answers to a wide range of medical questions.


<details>
<summary>Click to see details</summary>



</details>
# Clinical Entity Recognition
OpenBioLLM-70B can perform advanced clinical entity recognition by identifying and extracting key medical concepts, such as diseases, symptoms, medications, procedures, and anatomical structures, from unstructured clinical text. By leveraging its deep understanding of medical terminology and context, the model can accurately annotate and categorize clinical entities, enabling more efficient information retrieval, data analysis, and knowledge discovery from electronic health records, research articles, and other biomedical text sources. This capability can support various downstream applications, such as clinical decision support, pharmacovigilance, and medical research.



# Biomarkers Extraction

# Classification
OpenBioLLM-70B can perform various biomedical classification tasks, such as disease prediction, sentiment analysis, and medical document categorization.

# De-Identification
OpenBioLLM-70B can detect and remove personally identifiable information (PII) from medical records, ensuring patient privacy and compliance with data protection regulations like HIPAA.

**Advisory Notice!**
While OpenBioLLM-70B & 8B leverages high-quality data sources, its outputs may still contain inaccuracies, biases, or misalignments that could pose risks if relied upon for medical decision-making without further testing and refinement. The model's performance has not yet been rigorously evaluated in randomized controlled trials or real-world healthcare environments.
Therefore, we strongly advise against using OpenBioLLM-70B & 8B for any direct patient care, clinical decision support, or other professional medical purposes at this time. Its use should be limited to research, development, and exploratory applications by qualified individuals who understand its limitations.
OpenBioLLM-70B & 8B are intended solely as a research tool to assist healthcare professionals and should never be considered a replacement for the professional judgment and expertise of a qualified medical doctor.
Appropriately adapting and validating OpenBioLLM-70B & 8B for specific medical use cases would require significant additional work, potentially including:
- Thorough testing and evaluation in relevant clinical scenarios
- Alignment with evidence-based guidelines and best practices
- Mitigation of potential biases and failure modes
- Integration with human oversight and interpretation
- Compliance with regulatory and ethical standards
Always consult a qualified healthcare provider for personal medical needs.
# Citation
If you find OpenBioLLM-70B & 8B useful in your work, please cite the model as follows:
```
@misc{OpenBioLLMs,
author = {Ankit Pal and Malaikannan Sankarasubbu},
title = {OpenBioLLMs: Advancing Open-Source Large Language Models for Healthcare and Life Sciences},
year = {2024},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/aaditya/OpenBioLLM-Llama3-70B}}
}
```
The accompanying paper is currently in progress and will be released soon.
<div align="center">
<h2> 💌 Contact </h2>
</div>
We look forward to hearing from you and collaborating on this exciting project!
**Contributors:**
- [Ankit Pal (Aaditya Ura)](https://aadityaura.github.io/) [aadityaura at gmail dot com]
- Saama AI Labs
- Note: I am looking for a funded PhD opportunity, especially if it fits my Responsible Generative AI, Multimodal LLMs, Geometric Deep Learning, and Healthcare AI skillset.
# References
We thank the [Meta Team](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) for their amazing models!
Result sources
- [1] GPT-4 [Capabilities of GPT-4 on Medical Challenge Problems](https://arxiv.org/abs/2303.13375)
- [2] Med-PaLM-1 [Large Language Models Encode Clinical Knowledge](https://arxiv.org/abs/2212.13138)
- [3] Med-PaLM-2 [Towards Expert-Level Medical Question Answering with Large Language Models](https://arxiv.org/abs/2305.09617)
- [4] Gemini-1.0 [Gemini Goes to Med School](https://arxiv.org/abs/2402.07023) | [
"QUESTION_ANSWERING"
] | [
"MEDQA",
"PUBMEDQA"
] |
RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-07-22T14:24:16 | 2024-07-22T15:37:29 | 32 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
vi-gemma-2b-RAG - GGUF
- Model creator: https://huggingface.co/himmeow/
- Original model: https://huggingface.co/himmeow/vi-gemma-2b-RAG/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [vi-gemma-2b-RAG.Q2_K.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q2_K.gguf) | Q2_K | 1.08GB |
| [vi-gemma-2b-RAG.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ3_XS.gguf) | IQ3_XS | 1.16GB |
| [vi-gemma-2b-RAG.IQ3_S.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ3_S.gguf) | IQ3_S | 1.2GB |
| [vi-gemma-2b-RAG.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q3_K_S.gguf) | Q3_K_S | 1.2GB |
| [vi-gemma-2b-RAG.IQ3_M.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ3_M.gguf) | IQ3_M | 1.22GB |
| [vi-gemma-2b-RAG.Q3_K.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q3_K.gguf) | Q3_K | 1.29GB |
| [vi-gemma-2b-RAG.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q3_K_M.gguf) | Q3_K_M | 1.29GB |
| [vi-gemma-2b-RAG.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q3_K_L.gguf) | Q3_K_L | 1.36GB |
| [vi-gemma-2b-RAG.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ4_XS.gguf) | IQ4_XS | 1.4GB |
| [vi-gemma-2b-RAG.Q4_0.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_0.gguf) | Q4_0 | 1.44GB |
| [vi-gemma-2b-RAG.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.IQ4_NL.gguf) | IQ4_NL | 1.45GB |
| [vi-gemma-2b-RAG.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_K_S.gguf) | Q4_K_S | 1.45GB |
| [vi-gemma-2b-RAG.Q4_K.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_K.gguf) | Q4_K | 1.52GB |
| [vi-gemma-2b-RAG.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_K_M.gguf) | Q4_K_M | 1.52GB |
| [vi-gemma-2b-RAG.Q4_1.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q4_1.gguf) | Q4_1 | 1.56GB |
| [vi-gemma-2b-RAG.Q5_0.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_0.gguf) | Q5_0 | 1.68GB |
| [vi-gemma-2b-RAG.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_K_S.gguf) | Q5_K_S | 1.68GB |
| [vi-gemma-2b-RAG.Q5_K.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_K.gguf) | Q5_K | 1.71GB |
| [vi-gemma-2b-RAG.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_K_M.gguf) | Q5_K_M | 1.71GB |
| [vi-gemma-2b-RAG.Q5_1.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q5_1.gguf) | Q5_1 | 1.79GB |
| [vi-gemma-2b-RAG.Q6_K.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q6_K.gguf) | Q6_K | 1.92GB |
| [vi-gemma-2b-RAG.Q8_0.gguf](https://huggingface.co/RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf/blob/main/vi-gemma-2b-RAG.Q8_0.gguf) | Q8_0 | 2.49GB |
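To run one of these GGUF files locally, one option is `llama-cpp-python`. This is our suggestion rather than the quantizer's recommendation; the chosen filename is just one of the quants listed above.
```python
# Hedged sketch: running a quantized file with llama-cpp-python
# (pip install llama-cpp-python huggingface_hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/himmeow_-_vi-gemma-2b-RAG-gguf",
    filename="vi-gemma-2b-RAG.Q4_K_M.gguf",   # any quant from the table works
)
out = llm(
    "### Instruction and Input:\nBased on the following context/document:\n"
    "...\n### Response:\n",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```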
Original model description:
---
base_model: unsloth/gemma-1.1-2b-it-bnb-4bit
language:
- en
- vi
license: apache-2.0
tags:
- text-generation-inference
- retrieval-augmented-generation
- transformers
- unsloth
- gemma
- trl
- sft
---
## Model Card: vi-gemma-2b-RAG
### Tiếng Việt (Vietnamese)
**Mô tả mô hình:**
vi-gemma-2b-RAG là một mô hình ngôn ngữ lớn được tinh chỉnh từ mô hình cơ sở [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) sử dụng kỹ thuật LoRA. Mô hình được huấn luyện trên tập dữ liệu tiếng Việt với mục tiêu cải thiện khả năng xử lý ngôn ngữ tiếng Việt và nâng cao hiệu suất cho các tác vụ truy xuất thông tin mở (Retrieval Augmented Generation - RAG).
**Mục đích sử dụng:**
Mô hình vi-gemma-2b-RAG phù hợp cho các tác vụ sau:
* Trả lời câu hỏi dựa trên ngữ cảnh tiếng Việt.
* Tóm tắt văn bản tiếng Việt.
* Dịch máy tiếng Việt.
* Và các tác vụ tạo văn bản tiếng Việt khác.
**Giới hạn:**
Mặc dù đã được tinh chỉnh cho tiếng Việt, vi-gemma-2b-RAG vẫn có thể gặp phải một số hạn chế:
* Có thể tạo ra thông tin sai lệch hoặc không chính xác.
* Có thể thể hiện thành kiến hoặc quan điểm không phù hợp.
* Hiệu suất có thể bị ảnh hưởng bởi chất lượng của dữ liệu đầu vào.
**Cách sử dụng:**
Dưới đây chúng tôi chia sẻ một số đoạn mã về cách bắt đầu nhanh chóng để sử dụng mô hình. Trước tiên, hãy đảm bảo đã cài đặt `pip install -U transformers`, sau đó sao chép đoạn mã từ phần có liên quan đến usecase của bạn.
Chúng tôi khuyến nghị sử dụng `torch.bfloat16` làm mặc định.
```python
# pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Khởi tạo tokenizer và model từ checkpoint đã lưu
tokenizer = AutoTokenizer.from_pretrained("himmeow/vi-gemma-2b-RAG")
model = AutoModelForCausalLM.from_pretrained(
"himmeow/vi-gemma-2b-RAG",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Sử dụng GPU nếu có
if torch.cuda.is_available():
model.to("cuda")
# Định dạng prompt cho model
prompt = """
### Instruction and Input:
Dựa vào ngữ cảnh/tài liệu sau:
{}
Hãy trả lời câu hỏi: {}
### Response:
{}
"""
# Chuẩn bị dữ liệu đầu vào
input_data = """
Short Tandem Repeats (STRs) là các trình tự DNA lặp lại ngắn (2- 6 nucleotides) xuất hiện phổ biến trong hệ gen của con người. Các trình tự này có tính đa hình rất cao trong tự nhiên, điều này khiến các STRs trở thành những markers di truyền rất quan trọng trong nghiên cứu bản đồ gen người và chuẩn đoán bệnh lý di truyền cũng như xác định danh tính trong lĩnh vực pháp y.
Các STRs trở nên phổ biến tại các phòng xét nghiệm pháp y bởi vì việc nhân bản và phân tích STRs chỉ cần lượng DNA rất thấp ngay cả khi ở dạng bị phân hủy việc đinh danh vẫn có thể được thực hiện thành công. Hơn nữa việc phát hiện và đánh giá sự nhiễm DNA mẫu trong các mẫu vật có thể được giải quyết nhanh với kết quả phân tích STRs. Ở Hoa Kỳ hiện nay, từ bộ 13 markers nay đã tăng lên 20 markers chính đang được sử dụng để tạo ra một cơ sở dữ liệu DNA trên toàn đất nước được gọi là The FBI Combined DNA Index System (Expaned CODIS).
CODIS và các cơ sử dữ liệu DNA tương tự đang được sử dụng thực sự thành công trong việc liên kết các hồ sơ DNA từ các tội phạm và các bằng chứng hiện trường vụ án. Kết quả định danh STRs cũng được sử dụng để hỗ trợ hàng trăm nghìn trường hợp xét nghiệm huyết thống cha con mỗi năm'
"""
query = "Hãy cho tôi biết một số tính chất của STRs được dùng để làm gì?"
# Định dạng input text
input_text = prompt.format(input_data, query," ")
# Mã hóa input text thành input ids
input_ids = tokenizer(input_text, return_tensors="pt")
# Sử dụng GPU cho input ids nếu có
if torch.cuda.is_available():
input_ids = input_ids.to("cuda")
# Tạo văn bản bằng model
outputs = model.generate(
**input_ids,
max_new_tokens=500,
no_repeat_ngram_size=5, # Ngăn chặn lặp lại các cụm từ 5 gram
# do_sample=True, # Kích hoạt chế độ tạo văn bản dựa trên lấy mẫu. Trong chế độ này, model sẽ chọn ngẫu nhiên token tiếp theo dựa trên xác suất được tính từ phân phối xác suất của các token.
# temperature=0.7, # Giảm temperature để kiểm soát tính ngẫu nhiên
# early_stopping=True, # Dừng tạo văn bản khi tìm thấy kết thúc phù hợp
)
# Giải mã và in kết quả
print(tokenizer.decode(outputs[0]))
'''
<bos>
### Instruction and Input:
Dựa vào ngữ cảnh/tài liệu sau:
Short Tandem Repeats (STRs) là các trình tự DNA lặp lại ngắn (2- 6 nucleotides) xuất hiện phổ biến trong hệ gen của con người. Các trình tự này có tính đa hình rất cao trong tự nhiên, điều này khiến các STRs trở thành những markers di truyền rất quan trọng trong nghiên cứu bản đồ gen người và chuẩn đoán bệnh lý di truyền cũng như xác định danh tính trong lĩnh vực pháp y.
Các STRs trở nên phổ biến tại các phòng xét nghiệm pháp y bởi vì việc nhân bản và phân tích STRs chỉ cần lượng DNA rất thấp ngay cả khi ở dạng bị phân hủy việc đinh danh vẫn có thể được thực hiện thành công. Hơn nữa việc phát hiện và đánh giá sự nhiễm DNA mẫu trong các mẫu vật có thể được giải quyết nhanh với kết quả phân tích STRs. Ở Hoa Kỳ hiện nay, từ bộ 13 markers nay đã tăng lên 20 markers chính đang được sử dụng để tạo ra một cơ sở dữ liệu DNA trên toàn đất nước được gọi là The FBI Combined DNA Index System (Expaned CODIS).
CODIS và các cơ sử dữ liệu DNA tương tự đang được sử dụng thực sự thành công trong việc liên kết các hồ sơ DNA từ các tội phạm và các bằng chứng hiện trường vụ án. Kết quả định danh STRs cũng được sử dụng để hỗ trợ hàng trăm nghìn trường hợp xét nghiệm huyết thống cha con mỗi năm'
Hãy trả lời câu hỏi: Hãy cho tôi biết một số tính chất của STRs được dùng để làm gì?
### Response:
STRs được sử dụng để xác định danh tính, chuẩn đoán bệnh lý và xác định bệnh lý di truyền.
<eos>
'''
```
**Huấn luyện:**
* **Mô hình cơ sở:** google/gemma-1.1-2b-it
* **Tập dữ liệu:** lamhieu/mabrycodes_dialogue_vi
* **Phương pháp tinh chỉnh:** LoRA, PEFT với Unsloth
## Model Card: vi-gemma-2b-RAG
### English
**Model Description:**
vi-gemma-2b-RAG is a large language model fine-tuned from the base model [google/gemma-1.1-2b-it](https://huggingface.co/google/gemma-1.1-2b-it) using LoRA. The model is trained on a Vietnamese dataset to improve its Vietnamese language processing capabilities and enhance its performance for Retrieval Augmented Generation (RAG) tasks.
**Intended Use:**
The vi-gemma-2b-RAG model is suitable for tasks such as:
* Vietnamese question answering.
* Vietnamese text summarization.
* Vietnamese machine translation.
* And other Vietnamese text generation tasks.
**Limitations:**
While fine-tuned for Vietnamese, vi-gemma-2b-RAG may still have some limitations:
* It may generate incorrect or misleading information.
* It may exhibit biases or inappropriate opinions.
* Its performance may be affected by the quality of the input data.
**How to Use:**
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
We recommend `torch.bfloat16` as the default dtype.
```python
# pip install transformers torch accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Initialize the tokenizer and model from the saved checkpoint
tokenizer = AutoTokenizer.from_pretrained("himmeow/vi-gemma-2b-RAG")
model = AutoModelForCausalLM.from_pretrained(
"himmeow/vi-gemma-2b-RAG",
device_map="auto",
torch_dtype=torch.bfloat16
)
# Use GPU if available
if torch.cuda.is_available():
model.to("cuda")
# Define the prompt format for the model
prompt = """
### Instruction and Input:
Based on the following context/document:
{}
Please answer the question: {}
### Response:
{}
"""
# Prepare the input data
input_data = """
Short Tandem Repeats (STRs) are short (2-6 nucleotides) repeating DNA sequences that are widespread in the human genome. These sequences are highly polymorphic in nature, which makes STRs very important genetic markers in human gene mapping and diagnosis of hereditary diseases as well as identification in the field of forensics.
STRs have become popular in forensic laboratories because the replication and analysis of STRs requires very small amounts of DNA, even in decomposed form, identification can still be performed successfully. Furthermore, the detection and assessment of sample DNA contamination in specimens can be quickly resolved with STR analysis results. In the United States today, the set of 13 markers has now been increased to 20 main markers being used to create a nationwide DNA database called The FBI Combined DNA Index System (Expaned CODIS).
CODIS and similar DNA databases are being used very successfully in linking DNA records from criminals and crime scene evidence. STR identification results are also used to support hundreds of thousands of paternity test cases each year.'
"""
query = "Tell me what are some properties of STRs used for?"
# Format the input text
input_text = prompt.format(input_data, query," ")
# Encode the input text into input ids
input_ids = tokenizer(input_text, return_tensors="pt")
# Use GPU for input ids if available
if torch.cuda.is_available():
input_ids = input_ids.to("cuda")
# Generate text using the model
outputs = model.generate(
**input_ids,
max_new_tokens=500, # Limit the number of tokens generated
no_repeat_ngram_size=5, # Prevent repetition of 5-gram phrases
# do_sample=True,
# temperature=0.7, # Adjust the randomness of the generated text
# early_stopping=True, # Stop generating text when a suitable ending is found
)
# Decode and print the results
print(tokenizer.decode(outputs[0]))
```
**Training:**
* **Base Model:** google/gemma-1.1-2b-it
* **Dataset:** lamhieu/mabrycodes_dialogue_vi
* **Fine-tuning Method:** LoRA, PEFT and Unsloth
**Example usage repository:** https://github.com/Martincrux/Vietnamese-RAG-system-building-with-vi-gemma-2b-RAG-and-halong_embedding
# Uploaded model
- **Developed by:** [hiieu](https://huggingface.co/hiieu), [himmeow the coder](https://viblo.asia/u/MartinCrux), [cuctrinh](https://www.linkedin.com/in/trinh-cuc-5722832b6)
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-1.1-2b-it-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | [
"CHIA"
] |
knowledgator/gliner-poly-small-v1.0 | knowledgator | token-classification | [
"gliner",
"pytorch",
"token-classification",
"multilingual",
"dataset:urchade/pile-mistral-v0.1",
"dataset:numind/NuNER",
"dataset:knowledgator/GLINER-multi-task-synthetic-data",
"license:apache-2.0",
"region:us"
] | 2024-08-19T12:40:53 | 2024-08-25T11:38:05 | 32 | 14 | ---
datasets:
- urchade/pile-mistral-v0.1
- numind/NuNER
- knowledgator/GLINER-multi-task-synthetic-data
language:
- multilingual
library_name: gliner
license: apache-2.0
pipeline_tag: token-classification
---
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoder (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and to Large Language Models (LLMs) that, despite their flexibility, are costly and too large for resource-constrained scenarios.
This particular version utilizes a bi-encoder architecture with post-fusion, where the textual encoder is [DeBERTa v3 small](https://huggingface.co/microsoft/deberta-v3-small) and the entity label encoder is a sentence transformer - [BGE-small-en](https://huggingface.co/BAAI/bge-small-en-v1.5).
Such an architecture brings several advantages over uni-encoder GLiNER:
* An unlimited number of entities can be recognized at a single time;
* Faster inference if entity embeddings are precomputed;
* Better generalization to unseen entities;
The post-fusion strategy also brings advantages over a classical bi-encoder, enabling better inter-label understanding.
### Installation & Usage
Install or update the gliner package:
```bash
pip install gliner -U
```
Once you've downloaded the GLiNER library, you can import the GLiNER class. You can then load this model using `GLiNER.from_pretrained` and predict entities with `predict_entities`.
```python
from gliner import GLiNER
model = GLiNER.from_pretrained("knowledgator/gliner-poly-small-v1.0")
text = """
Cristiano Ronaldo dos Santos Aveiro (Portuguese pronunciation: [kɾiʃˈtjɐnu ʁɔˈnaldu]; born 5 February 1985) is a Portuguese professional footballer who plays as a forward for and captains both Saudi Pro League club Al Nassr and the Portugal national team. Widely regarded as one of the greatest players of all time, Ronaldo has won five Ballon d'Or awards,[note 3] a record three UEFA Men's Player of the Year Awards, and four European Golden Shoes, the most by a European player. He has won 33 trophies in his career, including seven league titles, five UEFA Champions Leagues, the UEFA European Championship and the UEFA Nations League. Ronaldo holds the records for most appearances (183), goals (140) and assists (42) in the Champions League, goals in the European Championship (14), international goals (128) and international appearances (205). He is one of the few players to have made over 1,200 professional career appearances, the most by an outfield player, and has scored over 850 official senior career goals for club and country, making him the top goalscorer of all time.
"""
labels = ["person", "award", "date", "competitions", "teams"]
entities = model.predict_entities(text, labels, threshold=0.25)
for entity in entities:
print(entity["text"], "=>", entity["label"])
```
```
Cristiano Ronaldo dos Santos Aveiro => person
5 February 1985 => date
Al Nassr => teams
Portugal national team => teams
Ballon d'Or => award
UEFA Men's Player of the Year Awards => award
European Golden Shoes => award
UEFA Champions Leagues => competitions
UEFA European Championship => competitions
UEFA Nations League => competitions
Champions League => competitions
European Championship => competitions
```
If you have a large number of entities and want to pre-embed them, please refer to the following code snippet:
```python
labels = ["your entities"]
texts = ["your texts"]
entity_embeddings = model.encode_labels(labels, batch_size = 8)
outputs = model.batch_predict_with_embeds(texts, entity_embeddings, labels)
```
### Benchmarks
Below you can see the table with benchmarking results on various named entity recognition datasets:
| Dataset | Score |
|---------|-------|
| ACE 2004 | 25.4% |
| ACE 2005 | 27.2% |
| AnatEM | 17.7% |
| Broad Tweet Corpus | 70.2% |
| CoNLL 2003 | 67.8% |
| FabNER | 22.9% |
| FindVehicle | 40.2% |
| GENIA_NER | 47.7% |
| HarveyNER | 15.5% |
| MultiNERD | 64.5% |
| Ontonotes | 28.7% |
| PolyglotNER | 47.5% |
| TweetNER7 | 39.3% |
| WikiANN en | 56.7% |
| WikiNeural | 80.0% |
| bc2gm | 56.2% |
| bc4chemd | 48.7% |
| bc5cdr | 60.5% |
| ncbi | 53.5% |
| **Average** | **45.8%** |
|||
| CrossNER_AI | 48.9% |
| CrossNER_literature | 64.0% |
| CrossNER_music | 68.7% |
| CrossNER_politics | 69.0% |
| CrossNER_science | 62.7% |
| mit-movie | 40.3% |
| mit-restaurant | 36.2% |
| **Average (zero-shot benchmark)** | **55.7%** |
### Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models. Join [Discord](https://discord.gg/dkyeAgs9DG). | [
"NAMED_ENTITY_RECOGNITION"
] | [
"ANATEM",
"BC5CDR"
] |
billatsectorflow/stella_en_1.5B_v5 | billatsectorflow | sentence-similarity | [
"sentence-transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"mteb",
"transformers",
"sentence-similarity",
"custom_code",
"arxiv:2205.13147",
"license:mit",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 2025-01-22T11:17:38 | 2025-01-22T11:25:36 | 32 | 2 | ---
license: mit
tags:
- mteb
- sentence-transformers
- transformers
- sentence-similarity
model-index:
- name: stella_en_1.5B_v5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 92.86567164179104
- type: ap
value: 72.13503907102613
- type: ap_weighted
value: 72.13503907102613
- type: f1
value: 89.5586886376355
- type: f1_weighted
value: 93.13621183004571
- type: main_score
value: 92.86567164179104
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.16485
- type: ap
value: 96.05546315415225
- type: ap_weighted
value: 96.05546315415225
- type: f1
value: 97.16351087403213
- type: f1_weighted
value: 97.16351087403213
- type: main_score
value: 97.16485
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 59.358
- type: f1
value: 59.0264615883114
- type: f1_weighted
value: 59.0264615883114
- type: main_score
value: 59.358
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: main_score
value: 65.269
- type: map_at_1
value: 41.607
- type: map_at_10
value: 57.104
- type: map_at_100
value: 57.621
- type: map_at_1000
value: 57.621
- type: map_at_20
value: 57.533
- type: map_at_3
value: 52.891999999999996
- type: map_at_5
value: 55.371
- type: mrr_at_1
value: 42.318634423897585
- type: mrr_at_10
value: 57.353970511865406
- type: mrr_at_100
value: 57.88398078476526
- type: mrr_at_1000
value: 57.88467807648422
- type: mrr_at_20
value: 57.796730533206166
- type: mrr_at_3
value: 53.200568990042775
- type: mrr_at_5
value: 55.6330014224753
- type: nauc_map_at_1000_diff1
value: 24.54414600428287
- type: nauc_map_at_1000_max
value: -8.389738078358459
- type: nauc_map_at_1000_std
value: -18.188787645801366
- type: nauc_map_at_100_diff1
value: 24.543138576462308
- type: nauc_map_at_100_max
value: -8.390896839752044
- type: nauc_map_at_100_std
value: -18.192549240185247
- type: nauc_map_at_10_diff1
value: 24.219607088995822
- type: nauc_map_at_10_max
value: -8.245734391254308
- type: nauc_map_at_10_std
value: -18.229706566466447
- type: nauc_map_at_1_diff1
value: 29.325201664812788
- type: nauc_map_at_1_max
value: -11.742800494823971
- type: nauc_map_at_1_std
value: -18.610215769702528
- type: nauc_map_at_20_diff1
value: 24.471097562798803
- type: nauc_map_at_20_max
value: -8.318035874000799
- type: nauc_map_at_20_std
value: -18.171541096773108
- type: nauc_map_at_3_diff1
value: 24.275846107642824
- type: nauc_map_at_3_max
value: -8.212242049581894
- type: nauc_map_at_3_std
value: -17.920379368937496
- type: nauc_map_at_5_diff1
value: 23.873692493209255
- type: nauc_map_at_5_max
value: -8.110347163828767
- type: nauc_map_at_5_std
value: -18.20863325596931
- type: nauc_mrr_at_1000_diff1
value: 22.656410956419975
- type: nauc_mrr_at_1000_max
value: -8.924888102233243
- type: nauc_mrr_at_1000_std
value: -18.103674384502526
- type: nauc_mrr_at_100_diff1
value: 22.655448817140968
- type: nauc_mrr_at_100_max
value: -8.926034318499038
- type: nauc_mrr_at_100_std
value: -18.10743930104164
- type: nauc_mrr_at_10_diff1
value: 22.297536272996872
- type: nauc_mrr_at_10_max
value: -8.836407556658274
- type: nauc_mrr_at_10_std
value: -18.1598393044477
- type: nauc_mrr_at_1_diff1
value: 27.419572424489708
- type: nauc_mrr_at_1_max
value: -11.42241314820691
- type: nauc_mrr_at_1_std
value: -18.54893865856313
- type: nauc_mrr_at_20_diff1
value: 22.590227214657418
- type: nauc_mrr_at_20_max
value: -8.849986456376993
- type: nauc_mrr_at_20_std
value: -18.0862391777352
- type: nauc_mrr_at_3_diff1
value: 22.415270167774988
- type: nauc_mrr_at_3_max
value: -8.692871854156435
- type: nauc_mrr_at_3_std
value: -17.6740102891955
- type: nauc_mrr_at_5_diff1
value: 21.96284578521464
- type: nauc_mrr_at_5_max
value: -8.757031535546025
- type: nauc_mrr_at_5_std
value: -18.210766964081294
- type: nauc_ndcg_at_1000_diff1
value: 23.939400161569115
- type: nauc_ndcg_at_1000_max
value: -7.866999120512983
- type: nauc_ndcg_at_1000_std
value: -17.981457019643617
- type: nauc_ndcg_at_100_diff1
value: 23.920033349619317
- type: nauc_ndcg_at_100_max
value: -7.889849409678031
- type: nauc_ndcg_at_100_std
value: -18.054931990360537
- type: nauc_ndcg_at_10_diff1
value: 22.543020461303534
- type: nauc_ndcg_at_10_max
value: -7.072111788010867
- type: nauc_ndcg_at_10_std
value: -18.26397604573537
- type: nauc_ndcg_at_1_diff1
value: 29.325201664812788
- type: nauc_ndcg_at_1_max
value: -11.742800494823971
- type: nauc_ndcg_at_1_std
value: -18.610215769702528
- type: nauc_ndcg_at_20_diff1
value: 23.551587021207972
- type: nauc_ndcg_at_20_max
value: -7.298056222649139
- type: nauc_ndcg_at_20_std
value: -18.056004880930608
- type: nauc_ndcg_at_3_diff1
value: 22.669089506345273
- type: nauc_ndcg_at_3_max
value: -7.278024373570137
- type: nauc_ndcg_at_3_std
value: -17.816657759914193
- type: nauc_ndcg_at_5_diff1
value: 21.72619728226575
- type: nauc_ndcg_at_5_max
value: -6.959741647471228
- type: nauc_ndcg_at_5_std
value: -18.35173705190235
- type: nauc_precision_at_1000_diff1
value: 5.0388241058076995
- type: nauc_precision_at_1000_max
value: 34.439879624882145
- type: nauc_precision_at_1000_std
value: 77.22610895194498
- type: nauc_precision_at_100_diff1
value: 1.340670767252794
- type: nauc_precision_at_100_max
value: 19.30870025961241
- type: nauc_precision_at_100_std
value: 35.37688289157788
- type: nauc_precision_at_10_diff1
value: 7.734227153124332
- type: nauc_precision_at_10_max
value: 4.202399088422237
- type: nauc_precision_at_10_std
value: -18.383890254046698
- type: nauc_precision_at_1_diff1
value: 29.325201664812788
- type: nauc_precision_at_1_max
value: -11.742800494823971
- type: nauc_precision_at_1_std
value: -18.610215769702528
- type: nauc_precision_at_20_diff1
value: 9.48070999361637
- type: nauc_precision_at_20_max
value: 19.056709637253025
- type: nauc_precision_at_20_std
value: -13.266821166159485
- type: nauc_precision_at_3_diff1
value: 17.245260303409747
- type: nauc_precision_at_3_max
value: -4.202455033452335
- type: nauc_precision_at_3_std
value: -17.514264039955332
- type: nauc_precision_at_5_diff1
value: 12.074628162049974
- type: nauc_precision_at_5_max
value: -1.9145501461107832
- type: nauc_precision_at_5_std
value: -19.162525528916344
- type: nauc_recall_at_1000_diff1
value: 5.038824105805915
- type: nauc_recall_at_1000_max
value: 34.43987962487738
- type: nauc_recall_at_1000_std
value: 77.22610895193765
- type: nauc_recall_at_100_diff1
value: 1.3406707672497025
- type: nauc_recall_at_100_max
value: 19.30870025960776
- type: nauc_recall_at_100_std
value: 35.37688289157515
- type: nauc_recall_at_10_diff1
value: 7.734227153124366
- type: nauc_recall_at_10_max
value: 4.202399088421976
- type: nauc_recall_at_10_std
value: -18.38389025404673
- type: nauc_recall_at_1_diff1
value: 29.325201664812788
- type: nauc_recall_at_1_max
value: -11.742800494823971
- type: nauc_recall_at_1_std
value: -18.610215769702528
- type: nauc_recall_at_20_diff1
value: 9.480709993616845
- type: nauc_recall_at_20_max
value: 19.05670963725301
- type: nauc_recall_at_20_std
value: -13.266821166158651
- type: nauc_recall_at_3_diff1
value: 17.24526030340978
- type: nauc_recall_at_3_max
value: -4.202455033452323
- type: nauc_recall_at_3_std
value: -17.51426403995538
- type: nauc_recall_at_5_diff1
value: 12.074628162049992
- type: nauc_recall_at_5_max
value: -1.914550146110865
- type: nauc_recall_at_5_std
value: -19.162525528916362
- type: ndcg_at_1
value: 41.607
- type: ndcg_at_10
value: 65.269
- type: ndcg_at_100
value: 67.289
- type: ndcg_at_1000
value: 67.29899999999999
- type: ndcg_at_20
value: 66.76299999999999
- type: ndcg_at_3
value: 56.604
- type: ndcg_at_5
value: 61.07900000000001
- type: precision_at_1
value: 41.607
- type: precision_at_10
value: 9.118
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_20
value: 4.8469999999999995
- type: precision_at_3
value: 22.451
- type: precision_at_5
value: 15.647
- type: recall_at_1
value: 41.607
- type: recall_at_10
value: 91.181
- type: recall_at_100
value: 99.57300000000001
- type: recall_at_1000
value: 99.644
- type: recall_at_20
value: 96.942
- type: recall_at_3
value: 67.354
- type: recall_at_5
value: 78.236
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: main_score
value: 55.437138353189994
- type: v_measure
value: 55.437138353189994
- type: v_measure_std
value: 14.718556601335491
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: main_score
value: 50.65858459544658
- type: v_measure
value: 50.65858459544658
- type: v_measure_std
value: 14.887033747525146
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: main_score
value: 67.32597152838535
- type: map
value: 67.32597152838535
- type: mrr
value: 78.98683111286988
- type: nAUC_map_diff1
value: 16.8624639710487
- type: nAUC_map_max
value: 24.91996491142433
- type: nAUC_map_std
value: 17.91865808793225
- type: nAUC_mrr_diff1
value: 25.03766425631947
- type: nAUC_mrr_max
value: 41.64561939958336
- type: nAUC_mrr_std
value: 23.179909345891968
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cosine_pearson
value: 85.790820496042
- type: cosine_spearman
value: 83.10731534330517
- type: euclidean_pearson
value: 84.61741304343133
- type: euclidean_spearman
value: 83.17297949010973
- type: main_score
value: 83.10731534330517
- type: manhattan_pearson
value: 85.2137696526676
- type: manhattan_spearman
value: 84.39168195786738
- type: pearson
value: 85.790820496042
- type: spearman
value: 83.10731534330517
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 89.78896103896105
- type: f1
value: 89.76107366333488
- type: f1_weighted
value: 89.76107366333488
- type: main_score
value: 89.78896103896105
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: main_score
value: 50.68092296236376
- type: v_measure
value: 50.68092296236376
- type: v_measure_std
value: 0.7832640983085436
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: main_score
value: 46.86629236732983
- type: v_measure
value: 46.86629236732983
- type: v_measure_std
value: 0.8784322236350974
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: main_score
value: 47.74883333333334
- type: map_at_1
value: 30.179249999999996
- type: map_at_10
value: 41.60824999999999
- type: map_at_100
value: 42.94008333333332
- type: map_at_1000
value: 43.04666666666667
- type: map_at_20
value: 42.36833333333334
- type: map_at_3
value: 38.23491666666666
- type: map_at_5
value: 40.10183333333333
- type: mrr_at_1
value: 36.47676085808166
- type: mrr_at_10
value: 46.300991916437155
- type: mrr_at_100
value: 47.12155753713262
- type: mrr_at_1000
value: 47.168033610799945
- type: mrr_at_20
value: 46.80405724560391
- type: mrr_at_3
value: 43.77000352801797
- type: mrr_at_5
value: 45.22295361704542
- type: nauc_map_at_1000_diff1
value: 46.953671666941524
- type: nauc_map_at_1000_max
value: 32.260396316089675
- type: nauc_map_at_1000_std
value: 0.6657766120094878
- type: nauc_map_at_100_diff1
value: 46.94717463394555
- type: nauc_map_at_100_max
value: 32.25088350678177
- type: nauc_map_at_100_std
value: 0.6257017014549283
- type: nauc_map_at_10_diff1
value: 46.974678429336464
- type: nauc_map_at_10_max
value: 31.862230807295504
- type: nauc_map_at_10_std
value: -0.14758828549579284
- type: nauc_map_at_1_diff1
value: 52.48913346466124
- type: nauc_map_at_1_max
value: 29.874374024967725
- type: nauc_map_at_1_std
value: -2.433547569836134
- type: nauc_map_at_20_diff1
value: 46.96088684217651
- type: nauc_map_at_20_max
value: 32.08954208613205
- type: nauc_map_at_20_std
value: 0.25946321113436527
- type: nauc_map_at_3_diff1
value: 47.703230121518345
- type: nauc_map_at_3_max
value: 30.977880095983107
- type: nauc_map_at_3_std
value: -1.342777563991804
- type: nauc_map_at_5_diff1
value: 47.1615010199957
- type: nauc_map_at_5_max
value: 31.420885812683284
- type: nauc_map_at_5_std
value: -0.8789297099444306
- type: nauc_mrr_at_1000_diff1
value: 46.69178645962615
- type: nauc_mrr_at_1000_max
value: 34.392807413340655
- type: nauc_mrr_at_1000_std
value: 1.6155464863667934
- type: nauc_mrr_at_100_diff1
value: 46.67417236349189
- type: nauc_mrr_at_100_max
value: 34.384607045512624
- type: nauc_mrr_at_100_std
value: 1.6259917384109652
- type: nauc_mrr_at_10_diff1
value: 46.60497560446239
- type: nauc_mrr_at_10_max
value: 34.32918897817958
- type: nauc_mrr_at_10_std
value: 1.39387793769014
- type: nauc_mrr_at_1_diff1
value: 51.61608573254137
- type: nauc_mrr_at_1_max
value: 35.18105023234596
- type: nauc_mrr_at_1_std
value: 0.17943702145478177
- type: nauc_mrr_at_20_diff1
value: 46.635943069860254
- type: nauc_mrr_at_20_max
value: 34.37050973118794
- type: nauc_mrr_at_20_std
value: 1.5346464678860607
- type: nauc_mrr_at_3_diff1
value: 47.154389369038334
- type: nauc_mrr_at_3_max
value: 34.41036411855465
- type: nauc_mrr_at_3_std
value: 0.924551812357872
- type: nauc_mrr_at_5_diff1
value: 46.6690101691763
- type: nauc_mrr_at_5_max
value: 34.29740388138466
- type: nauc_mrr_at_5_std
value: 1.0567184149139792
- type: nauc_ndcg_at_1000_diff1
value: 45.375448289173264
- type: nauc_ndcg_at_1000_max
value: 33.47957083714482
- type: nauc_ndcg_at_1000_std
value: 3.192251100225568
- type: nauc_ndcg_at_100_diff1
value: 44.93601014699499
- type: nauc_ndcg_at_100_max
value: 33.21249888295249
- type: nauc_ndcg_at_100_std
value: 3.609842852934217
- type: nauc_ndcg_at_10_diff1
value: 44.87893284011915
- type: nauc_ndcg_at_10_max
value: 32.384885249478515
- type: nauc_ndcg_at_10_std
value: 1.454493065035396
- type: nauc_ndcg_at_1_diff1
value: 51.61608573254137
- type: nauc_ndcg_at_1_max
value: 35.18105023234596
- type: nauc_ndcg_at_1_std
value: 0.17943702145478177
- type: nauc_ndcg_at_20_diff1
value: 44.867752179050605
- type: nauc_ndcg_at_20_max
value: 32.689535921840196
- type: nauc_ndcg_at_20_std
value: 2.337765158573901
- type: nauc_ndcg_at_3_diff1
value: 45.87485821381341
- type: nauc_ndcg_at_3_max
value: 32.33282450558947
- type: nauc_ndcg_at_3_std
value: 0.0681643829273283
- type: nauc_ndcg_at_5_diff1
value: 45.202902131892394
- type: nauc_ndcg_at_5_max
value: 32.1026971523917
- type: nauc_ndcg_at_5_std
value: 0.3565572833774486
- type: nauc_precision_at_1000_diff1
value: -8.935267931198956
- type: nauc_precision_at_1000_max
value: 6.464981960169269
- type: nauc_precision_at_1000_std
value: 10.662786182234633
- type: nauc_precision_at_100_diff1
value: -1.64091517847155
- type: nauc_precision_at_100_max
value: 15.175617871025024
- type: nauc_precision_at_100_std
value: 16.924256989248075
- type: nauc_precision_at_10_diff1
value: 15.676651966277047
- type: nauc_precision_at_10_max
value: 26.243734188847117
- type: nauc_precision_at_10_std
value: 10.601741034956333
- type: nauc_precision_at_1_diff1
value: 51.61608573254137
- type: nauc_precision_at_1_max
value: 35.18105023234596
- type: nauc_precision_at_1_std
value: 0.17943702145478177
- type: nauc_precision_at_20_diff1
value: 9.447267260198654
- type: nauc_precision_at_20_max
value: 23.024130858142723
- type: nauc_precision_at_20_std
value: 13.739145648899603
- type: nauc_precision_at_3_diff1
value: 30.11583572134629
- type: nauc_precision_at_3_max
value: 31.37321080069495
- type: nauc_precision_at_3_std
value: 4.705512374126024
- type: nauc_precision_at_5_diff1
value: 23.192015335996093
- type: nauc_precision_at_5_max
value: 29.415746835998764
- type: nauc_precision_at_5_std
value: 6.843498772798558
- type: nauc_recall_at_1000_diff1
value: 25.36573313426033
- type: nauc_recall_at_1000_max
value: 43.06672256524168
- type: nauc_recall_at_1000_std
value: 47.93664853815292
- type: nauc_recall_at_100_diff1
value: 31.222880916617406
- type: nauc_recall_at_100_max
value: 31.761159904172658
- type: nauc_recall_at_100_std
value: 23.034218976635877
- type: nauc_recall_at_10_diff1
value: 36.23439028915225
- type: nauc_recall_at_10_max
value: 28.473458977606438
- type: nauc_recall_at_10_std
value: 3.7797969934159
- type: nauc_recall_at_1_diff1
value: 52.48913346466124
- type: nauc_recall_at_1_max
value: 29.874374024967725
- type: nauc_recall_at_1_std
value: -2.433547569836134
- type: nauc_recall_at_20_diff1
value: 34.678676952584766
- type: nauc_recall_at_20_max
value: 29.04638392522168
- type: nauc_recall_at_20_std
value: 8.148894982082549
- type: nauc_recall_at_3_diff1
value: 41.31029996231311
- type: nauc_recall_at_3_max
value: 28.44199443414157
- type: nauc_recall_at_3_std
value: -0.747324057600377
- type: nauc_recall_at_5_diff1
value: 38.535873899920674
- type: nauc_recall_at_5_max
value: 27.942667805948375
- type: nauc_recall_at_5_std
value: 0.30652206930973686
- type: ndcg_at_1
value: 36.47675
- type: ndcg_at_10
value: 47.74883333333334
- type: ndcg_at_100
value: 52.902416666666674
- type: ndcg_at_1000
value: 54.69116666666667
- type: ndcg_at_20
value: 49.89758333333333
- type: ndcg_at_3
value: 42.462250000000004
- type: ndcg_at_5
value: 44.91841666666667
- type: precision_at_1
value: 36.47675
- type: precision_at_10
value: 8.582416666666665
- type: precision_at_100
value: 1.31475
- type: precision_at_1000
value: 0.16458333333333333
- type: precision_at_20
value: 5.021833333333333
- type: precision_at_3
value: 20.004499999999997
- type: precision_at_5
value: 14.178666666666665
- type: recall_at_1
value: 30.179249999999996
- type: recall_at_10
value: 60.950166666666675
- type: recall_at_100
value: 83.19025
- type: recall_at_1000
value: 95.27774999999998
- type: recall_at_20
value: 68.80175
- type: recall_at_3
value: 46.01841666666666
- type: recall_at_5
value: 52.482416666666666
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: main_score
value: 46.113
- type: map_at_1
value: 20.122999999999998
- type: map_at_10
value: 35.474
- type: map_at_100
value: 37.592
- type: map_at_1000
value: 37.773
- type: map_at_20
value: 36.637
- type: map_at_3
value: 29.731
- type: map_at_5
value: 32.964
- type: mrr_at_1
value: 46.71009771986971
- type: mrr_at_10
value: 58.855669303552105
- type: mrr_at_100
value: 59.389249674038425
- type: mrr_at_1000
value: 59.408448104362364
- type: mrr_at_20
value: 59.23881203149016
- type: mrr_at_3
value: 56.18892508143328
- type: mrr_at_5
value: 57.85342019543985
- type: nauc_map_at_1000_diff1
value: 27.047031037721958
- type: nauc_map_at_1000_max
value: 43.25240279148033
- type: nauc_map_at_1000_std
value: 20.795849418696037
- type: nauc_map_at_100_diff1
value: 27.044739015116452
- type: nauc_map_at_100_max
value: 43.24042159787812
- type: nauc_map_at_100_std
value: 20.799952124137683
- type: nauc_map_at_10_diff1
value: 27.372696854670338
- type: nauc_map_at_10_max
value: 43.054456574721684
- type: nauc_map_at_10_std
value: 19.537162110136645
- type: nauc_map_at_1_diff1
value: 43.65424623953092
- type: nauc_map_at_1_max
value: 45.17986509998762
- type: nauc_map_at_1_std
value: 8.497107052335414
- type: nauc_map_at_20_diff1
value: 27.224535846566074
- type: nauc_map_at_20_max
value: 43.12222854561229
- type: nauc_map_at_20_std
value: 20.29982972202669
- type: nauc_map_at_3_diff1
value: 30.87847002319001
- type: nauc_map_at_3_max
value: 42.890027891707575
- type: nauc_map_at_3_std
value: 13.857451947580929
- type: nauc_map_at_5_diff1
value: 27.966867093591542
- type: nauc_map_at_5_max
value: 42.35826637592201
- type: nauc_map_at_5_std
value: 16.993102524058624
- type: nauc_mrr_at_1000_diff1
value: 30.191544077608164
- type: nauc_mrr_at_1000_max
value: 44.959438920351644
- type: nauc_mrr_at_1000_std
value: 24.065801376465114
- type: nauc_mrr_at_100_diff1
value: 30.170368115494
- type: nauc_mrr_at_100_max
value: 44.955868115761156
- type: nauc_mrr_at_100_std
value: 24.093510767847707
- type: nauc_mrr_at_10_diff1
value: 30.128430637520175
- type: nauc_mrr_at_10_max
value: 44.97689261350708
- type: nauc_mrr_at_10_std
value: 24.037049561818897
- type: nauc_mrr_at_1_diff1
value: 35.323351939108214
- type: nauc_mrr_at_1_max
value: 43.85026244855636
- type: nauc_mrr_at_1_std
value: 17.040662141218974
- type: nauc_mrr_at_20_diff1
value: 30.192006556160443
- type: nauc_mrr_at_20_max
value: 45.02814530774032
- type: nauc_mrr_at_20_std
value: 24.20885865448696
- type: nauc_mrr_at_3_diff1
value: 29.88250163424518
- type: nauc_mrr_at_3_max
value: 44.25768944883186
- type: nauc_mrr_at_3_std
value: 22.804183393364198
- type: nauc_mrr_at_5_diff1
value: 30.269824490420767
- type: nauc_mrr_at_5_max
value: 44.97443265796657
- type: nauc_mrr_at_5_std
value: 23.894159916141177
- type: nauc_ndcg_at_1000_diff1
value: 24.533764005407356
- type: nauc_ndcg_at_1000_max
value: 44.50902713386608
- type: nauc_ndcg_at_1000_std
value: 27.589506980238404
- type: nauc_ndcg_at_100_diff1
value: 24.209785073940353
- type: nauc_ndcg_at_100_max
value: 44.18257063893669
- type: nauc_ndcg_at_100_std
value: 27.963150866401943
- type: nauc_ndcg_at_10_diff1
value: 25.168069201989486
- type: nauc_ndcg_at_10_max
value: 43.84940910683214
- type: nauc_ndcg_at_10_std
value: 24.810707270956435
- type: nauc_ndcg_at_1_diff1
value: 35.323351939108214
- type: nauc_ndcg_at_1_max
value: 43.85026244855636
- type: nauc_ndcg_at_1_std
value: 17.040662141218974
- type: nauc_ndcg_at_20_diff1
value: 24.829924800466834
- type: nauc_ndcg_at_20_max
value: 43.738574327059716
- type: nauc_ndcg_at_20_std
value: 26.252370278684072
- type: nauc_ndcg_at_3_diff1
value: 27.321943393906274
- type: nauc_ndcg_at_3_max
value: 42.16584786993447
- type: nauc_ndcg_at_3_std
value: 18.24775079455969
- type: nauc_ndcg_at_5_diff1
value: 26.043785418347998
- type: nauc_ndcg_at_5_max
value: 42.874593895388344
- type: nauc_ndcg_at_5_std
value: 21.294004555506117
- type: nauc_precision_at_1000_diff1
value: -22.073027615308582
- type: nauc_precision_at_1000_max
value: -6.549723766317357
- type: nauc_precision_at_1000_std
value: 18.301749191241306
- type: nauc_precision_at_100_diff1
value: -15.654286887593619
- type: nauc_precision_at_100_max
value: 6.401516251421999
- type: nauc_precision_at_100_std
value: 29.170680324929805
- type: nauc_precision_at_10_diff1
value: -4.362381972892247
- type: nauc_precision_at_10_max
value: 22.10943515872447
- type: nauc_precision_at_10_std
value: 31.869699459530022
- type: nauc_precision_at_1_diff1
value: 35.323351939108214
- type: nauc_precision_at_1_max
value: 43.85026244855636
- type: nauc_precision_at_1_std
value: 17.040662141218974
- type: nauc_precision_at_20_diff1
value: -7.50749661117875
- type: nauc_precision_at_20_max
value: 16.80584016023257
- type: nauc_precision_at_20_std
value: 31.976755897112437
- type: nauc_precision_at_3_diff1
value: 7.402667538773083
- type: nauc_precision_at_3_max
value: 31.2088401330676
- type: nauc_precision_at_3_std
value: 24.287905698405662
- type: nauc_precision_at_5_diff1
value: 0.7479172565343901
- type: nauc_precision_at_5_max
value: 26.28427734237825
- type: nauc_precision_at_5_std
value: 28.246947120310317
- type: nauc_recall_at_1000_diff1
value: 2.4778431086370496
- type: nauc_recall_at_1000_max
value: 40.2231995797509
- type: nauc_recall_at_1000_std
value: 52.62124052183862
- type: nauc_recall_at_100_diff1
value: 8.960962419741463
- type: nauc_recall_at_100_max
value: 35.81132850291491
- type: nauc_recall_at_100_std
value: 40.020903251786166
- type: nauc_recall_at_10_diff1
value: 15.603400751376636
- type: nauc_recall_at_10_max
value: 37.570127529136485
- type: nauc_recall_at_10_std
value: 28.07128410238545
- type: nauc_recall_at_1_diff1
value: 43.65424623953092
- type: nauc_recall_at_1_max
value: 45.17986509998762
- type: nauc_recall_at_1_std
value: 8.497107052335414
- type: nauc_recall_at_20_diff1
value: 13.844820282832346
- type: nauc_recall_at_20_max
value: 36.0106148516309
- type: nauc_recall_at_20_std
value: 31.453103910565254
- type: nauc_recall_at_3_diff1
value: 24.359328154117748
- type: nauc_recall_at_3_max
value: 39.93774251377568
- type: nauc_recall_at_3_std
value: 16.214921517509648
- type: nauc_recall_at_5_diff1
value: 18.75788451360292
- type: nauc_recall_at_5_max
value: 38.177646107055516
- type: nauc_recall_at_5_std
value: 22.17196825834675
- type: ndcg_at_1
value: 46.71
- type: ndcg_at_10
value: 46.113
- type: ndcg_at_100
value: 53.035
- type: ndcg_at_1000
value: 55.724
- type: ndcg_at_20
value: 48.929
- type: ndcg_at_3
value: 39.501999999999995
- type: ndcg_at_5
value: 41.792
- type: precision_at_1
value: 46.71
- type: precision_at_10
value: 14.274000000000001
- type: precision_at_100
value: 2.1870000000000003
- type: precision_at_1000
value: 0.269
- type: precision_at_20
value: 8.375
- type: precision_at_3
value: 29.881
- type: precision_at_5
value: 22.697
- type: recall_at_1
value: 20.122999999999998
- type: recall_at_10
value: 52.22
- type: recall_at_100
value: 75.388
- type: recall_at_1000
value: 89.938
- type: recall_at_20
value: 60.077000000000005
- type: recall_at_3
value: 35.150999999999996
- type: recall_at_5
value: 42.748000000000005
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: main_score
value: 52.276999999999994
- type: map_at_1
value: 9.949
- type: map_at_10
value: 24.891
- type: map_at_100
value: 37.111
- type: map_at_1000
value: 39.266
- type: map_at_20
value: 29.685
- type: map_at_3
value: 16.586000000000002
- type: map_at_5
value: 19.982
- type: mrr_at_1
value: 76.25
- type: mrr_at_10
value: 82.4518849206349
- type: mrr_at_100
value: 82.70302194564499
- type: mrr_at_1000
value: 82.70909729942254
- type: mrr_at_20
value: 82.60492765962964
- type: mrr_at_3
value: 81.33333333333331
- type: mrr_at_5
value: 82.14583333333331
- type: nauc_map_at_1000_diff1
value: 21.427201262456556
- type: nauc_map_at_1000_max
value: 35.357361590816076
- type: nauc_map_at_1000_std
value: 24.785419223353717
- type: nauc_map_at_100_diff1
value: 22.82358692021537
- type: nauc_map_at_100_max
value: 35.07399692072945
- type: nauc_map_at_100_std
value: 22.679878828987025
- type: nauc_map_at_10_diff1
value: 26.491769223479643
- type: nauc_map_at_10_max
value: 20.78079385443902
- type: nauc_map_at_10_std
value: -4.910406292079661
- type: nauc_map_at_1_diff1
value: 35.20851030208876
- type: nauc_map_at_1_max
value: 5.783003346365858
- type: nauc_map_at_1_std
value: -21.11679133835354
- type: nauc_map_at_20_diff1
value: 24.80097499300491
- type: nauc_map_at_20_max
value: 26.807021360774975
- type: nauc_map_at_20_std
value: 4.793103995429955
- type: nauc_map_at_3_diff1
value: 29.238193458890173
- type: nauc_map_at_3_max
value: 10.300839972189456
- type: nauc_map_at_3_std
value: -17.889666731981592
- type: nauc_map_at_5_diff1
value: 28.773624870573926
- type: nauc_map_at_5_max
value: 14.951435645422887
- type: nauc_map_at_5_std
value: -13.319697827173565
- type: nauc_mrr_at_1000_diff1
value: 55.232544856708785
- type: nauc_mrr_at_1000_max
value: 64.73225637682637
- type: nauc_mrr_at_1000_std
value: 37.57480399594188
- type: nauc_mrr_at_100_diff1
value: 55.219251601773735
- type: nauc_mrr_at_100_max
value: 64.73305063663611
- type: nauc_mrr_at_100_std
value: 37.56458562909293
- type: nauc_mrr_at_10_diff1
value: 55.123463838253464
- type: nauc_mrr_at_10_max
value: 64.91914041040233
- type: nauc_mrr_at_10_std
value: 37.76482503851598
- type: nauc_mrr_at_1_diff1
value: 56.45461238513347
- type: nauc_mrr_at_1_max
value: 63.11782510293676
- type: nauc_mrr_at_1_std
value: 33.592561284868985
- type: nauc_mrr_at_20_diff1
value: 55.15401961460458
- type: nauc_mrr_at_20_max
value: 64.77145835613156
- type: nauc_mrr_at_20_std
value: 37.471561418305804
- type: nauc_mrr_at_3_diff1
value: 54.64387438697658
- type: nauc_mrr_at_3_max
value: 64.27618995019164
- type: nauc_mrr_at_3_std
value: 39.391637295269014
- type: nauc_mrr_at_5_diff1
value: 55.08702591239485
- type: nauc_mrr_at_5_max
value: 64.6071475650635
- type: nauc_mrr_at_5_std
value: 37.97185134269896
- type: nauc_ndcg_at_1000_diff1
value: 31.696698876400387
- type: nauc_ndcg_at_1000_max
value: 52.12183760001191
- type: nauc_ndcg_at_1000_std
value: 40.197596211778716
- type: nauc_ndcg_at_100_diff1
value: 33.253120193433666
- type: nauc_ndcg_at_100_max
value: 49.47167758554746
- type: nauc_ndcg_at_100_std
value: 32.643833139756204
- type: nauc_ndcg_at_10_diff1
value: 27.065541392580013
- type: nauc_ndcg_at_10_max
value: 45.83504281289289
- type: nauc_ndcg_at_10_std
value: 27.11739500732328
- type: nauc_ndcg_at_1_diff1
value: 49.42808250022517
- type: nauc_ndcg_at_1_max
value: 53.502615048520354
- type: nauc_ndcg_at_1_std
value: 27.17555908836708
- type: nauc_ndcg_at_20_diff1
value: 29.374791382330308
- type: nauc_ndcg_at_20_max
value: 43.91246842479055
- type: nauc_ndcg_at_20_std
value: 23.419410620550316
- type: nauc_ndcg_at_3_diff1
value: 26.71550354496204
- type: nauc_ndcg_at_3_max
value: 43.9641457892003
- type: nauc_ndcg_at_3_std
value: 27.320024167947686
- type: nauc_ndcg_at_5_diff1
value: 27.020654974589487
- type: nauc_ndcg_at_5_max
value: 46.130417266030584
- type: nauc_ndcg_at_5_std
value: 28.392009019010068
- type: nauc_precision_at_1000_diff1
value: -21.47455482181002
- type: nauc_precision_at_1000_max
value: -9.721907229236024
- type: nauc_precision_at_1000_std
value: -1.061132062651487
- type: nauc_precision_at_100_diff1
value: -12.35759246101943
- type: nauc_precision_at_100_max
value: 15.509512444892168
- type: nauc_precision_at_100_std
value: 36.21183578592014
- type: nauc_precision_at_10_diff1
value: -6.136998947343125
- type: nauc_precision_at_10_max
value: 32.30037906748288
- type: nauc_precision_at_10_std
value: 41.4500302476981
- type: nauc_precision_at_1_diff1
value: 56.45461238513347
- type: nauc_precision_at_1_max
value: 63.11782510293676
- type: nauc_precision_at_1_std
value: 33.592561284868985
- type: nauc_precision_at_20_diff1
value: -7.335890123683174
- type: nauc_precision_at_20_max
value: 28.31417075291312
- type: nauc_precision_at_20_std
value: 41.405935715061815
- type: nauc_precision_at_3_diff1
value: 7.117255890225942
- type: nauc_precision_at_3_max
value: 39.19894132683829
- type: nauc_precision_at_3_std
value: 38.48255841994843
- type: nauc_precision_at_5_diff1
value: 1.861523090114206
- type: nauc_precision_at_5_max
value: 38.11649223007208
- type: nauc_precision_at_5_std
value: 40.52993530374645
- type: nauc_recall_at_1000_diff1
value: 26.497648584314636
- type: nauc_recall_at_1000_max
value: 44.48069746734414
- type: nauc_recall_at_1000_std
value: 53.16438130228715
- type: nauc_recall_at_100_diff1
value: 26.353456899511446
- type: nauc_recall_at_100_max
value: 37.57379787884197
- type: nauc_recall_at_100_std
value: 29.197468295989548
- type: nauc_recall_at_10_diff1
value: 22.80445738351114
- type: nauc_recall_at_10_max
value: 15.895630778449046
- type: nauc_recall_at_10_std
value: -8.746224797644501
- type: nauc_recall_at_1_diff1
value: 35.20851030208876
- type: nauc_recall_at_1_max
value: 5.783003346365858
- type: nauc_recall_at_1_std
value: -21.11679133835354
- type: nauc_recall_at_20_diff1
value: 22.34028867678706
- type: nauc_recall_at_20_max
value: 21.42373427646772
- type: nauc_recall_at_20_std
value: 0.4533036151015875
- type: nauc_recall_at_3_diff1
value: 24.96853445599229
- type: nauc_recall_at_3_max
value: 6.245185375804208
- type: nauc_recall_at_3_std
value: -20.200240127099622
- type: nauc_recall_at_5_diff1
value: 24.749259476710623
- type: nauc_recall_at_5_max
value: 11.024592845995942
- type: nauc_recall_at_5_std
value: -16.15683085641543
- type: ndcg_at_1
value: 64.125
- type: ndcg_at_10
value: 52.276999999999994
- type: ndcg_at_100
value: 57.440000000000005
- type: ndcg_at_1000
value: 64.082
- type: ndcg_at_20
value: 51.383
- type: ndcg_at_3
value: 55.769000000000005
- type: ndcg_at_5
value: 53.978
- type: precision_at_1
value: 76.25
- type: precision_at_10
value: 43.05
- type: precision_at_100
value: 14.09
- type: precision_at_1000
value: 2.662
- type: precision_at_20
value: 33.112
- type: precision_at_3
value: 59.833000000000006
- type: precision_at_5
value: 53.05
- type: recall_at_1
value: 9.949
- type: recall_at_10
value: 30.424
- type: recall_at_100
value: 64.062
- type: recall_at_1000
value: 85.916
- type: recall_at_20
value: 39.895
- type: recall_at_3
value: 17.876
- type: recall_at_5
value: 22.536
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 84.29499999999999
- type: f1
value: 79.76188258172078
- type: f1_weighted
value: 84.96026012933847
- type: main_score
value: 84.29499999999999
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: main_score
value: 94.83200000000001
- type: map_at_1
value: 87.339
- type: map_at_10
value: 92.92099999999999
- type: map_at_100
value: 93.108
- type: map_at_1000
value: 93.116
- type: map_at_20
value: 93.041
- type: map_at_3
value: 92.219
- type: map_at_5
value: 92.664
- type: mrr_at_1
value: 93.99939993999399
- type: mrr_at_10
value: 96.55188137861403
- type: mrr_at_100
value: 96.5652366009286
- type: mrr_at_1000
value: 96.5652625550811
- type: mrr_at_20
value: 96.5601781754844
- type: mrr_at_3
value: 96.45714571457142
- type: mrr_at_5
value: 96.544904490449
- type: nauc_map_at_1000_diff1
value: 51.81676454961933
- type: nauc_map_at_1000_max
value: 24.904822914926118
- type: nauc_map_at_1000_std
value: -3.8110347821630404
- type: nauc_map_at_100_diff1
value: 51.77514975011158
- type: nauc_map_at_100_max
value: 24.912497341800094
- type: nauc_map_at_100_std
value: -3.76229517662447
- type: nauc_map_at_10_diff1
value: 51.29608296382479
- type: nauc_map_at_10_max
value: 24.78704970246707
- type: nauc_map_at_10_std
value: -3.723130815783328
- type: nauc_map_at_1_diff1
value: 59.90813138005125
- type: nauc_map_at_1_max
value: 24.58479295693794
- type: nauc_map_at_1_std
value: -8.056152492777027
- type: nauc_map_at_20_diff1
value: 51.428639331678326
- type: nauc_map_at_20_max
value: 24.849214517705086
- type: nauc_map_at_20_std
value: -3.685550123874596
- type: nauc_map_at_3_diff1
value: 50.94399923719279
- type: nauc_map_at_3_max
value: 24.359700180006207
- type: nauc_map_at_3_std
value: -5.407767408816422
- type: nauc_map_at_5_diff1
value: 50.767302682959546
- type: nauc_map_at_5_max
value: 24.491113461892215
- type: nauc_map_at_5_std
value: -4.058336127339082
- type: nauc_mrr_at_1000_diff1
value: 79.86042313551833
- type: nauc_mrr_at_1000_max
value: 23.20960445633933
- type: nauc_mrr_at_1000_std
value: -23.54334295120471
- type: nauc_mrr_at_100_diff1
value: 79.85991247027636
- type: nauc_mrr_at_100_max
value: 23.210085926780106
- type: nauc_mrr_at_100_std
value: -23.542508200789197
- type: nauc_mrr_at_10_diff1
value: 79.71095155563415
- type: nauc_mrr_at_10_max
value: 23.24128650883908
- type: nauc_mrr_at_10_std
value: -23.408502781834102
- type: nauc_mrr_at_1_diff1
value: 82.6349900233902
- type: nauc_mrr_at_1_max
value: 21.994548214014227
- type: nauc_mrr_at_1_std
value: -22.549769792179262
- type: nauc_mrr_at_20_diff1
value: 79.76465012873038
- type: nauc_mrr_at_20_max
value: 23.17575026523213
- type: nauc_mrr_at_20_std
value: -23.492660166315048
- type: nauc_mrr_at_3_diff1
value: 79.91074933379953
- type: nauc_mrr_at_3_max
value: 24.14246499097892
- type: nauc_mrr_at_3_std
value: -25.22601708389664
- type: nauc_mrr_at_5_diff1
value: 79.62092651565847
- type: nauc_mrr_at_5_max
value: 23.315937737034425
- type: nauc_mrr_at_5_std
value: -23.317659360058403
- type: nauc_ndcg_at_1000_diff1
value: 54.404537986779225
- type: nauc_ndcg_at_1000_max
value: 25.38408304128995
- type: nauc_ndcg_at_1000_std
value: -4.916709117696968
- type: nauc_ndcg_at_100_diff1
value: 53.2448598868241
- type: nauc_ndcg_at_100_max
value: 25.75325255295546
- type: nauc_ndcg_at_100_std
value: -3.680507005630751
- type: nauc_ndcg_at_10_diff1
value: 50.81057355170232
- type: nauc_ndcg_at_10_max
value: 25.006448273343807
- type: nauc_ndcg_at_10_std
value: -2.8979899112515577
- type: nauc_ndcg_at_1_diff1
value: 82.6349900233902
- type: nauc_ndcg_at_1_max
value: 21.994548214014227
- type: nauc_ndcg_at_1_std
value: -22.549769792179262
- type: nauc_ndcg_at_20_diff1
value: 51.205023097166304
- type: nauc_ndcg_at_20_max
value: 25.22133626556826
- type: nauc_ndcg_at_20_std
value: -2.9506328244150155
- type: nauc_ndcg_at_3_diff1
value: 51.79780256736321
- type: nauc_ndcg_at_3_max
value: 24.81137324438439
- type: nauc_ndcg_at_3_std
value: -6.881223858227807
- type: nauc_ndcg_at_5_diff1
value: 50.290038260564565
- type: nauc_ndcg_at_5_max
value: 24.57250792165796
- type: nauc_ndcg_at_5_std
value: -3.5124628344654596
- type: nauc_precision_at_1000_diff1
value: -20.215211396894333
- type: nauc_precision_at_1000_max
value: -14.165452298769171
- type: nauc_precision_at_1000_std
value: -2.0952871214470816
- type: nauc_precision_at_100_diff1
value: -22.340257474494607
- type: nauc_precision_at_100_max
value: -12.697885641360282
- type: nauc_precision_at_100_std
value: 1.0688624940286244
- type: nauc_precision_at_10_diff1
value: -24.78271817420798
- type: nauc_precision_at_10_max
value: -12.625257500222656
- type: nauc_precision_at_10_std
value: 3.223250450607087
- type: nauc_precision_at_1_diff1
value: 82.6349900233902
- type: nauc_precision_at_1_max
value: 21.994548214014227
- type: nauc_precision_at_1_std
value: -22.549769792179262
- type: nauc_precision_at_20_diff1
value: -24.375756227194177
- type: nauc_precision_at_20_max
value: -12.341015011563536
- type: nauc_precision_at_20_std
value: 2.7475274619387955
- type: nauc_precision_at_3_diff1
value: -24.8251306777365
- type: nauc_precision_at_3_max
value: -13.109579709589042
- type: nauc_precision_at_3_std
value: -1.2233442335420748
- type: nauc_precision_at_5_diff1
value: -26.955418583344894
- type: nauc_precision_at_5_max
value: -13.598630838071015
- type: nauc_precision_at_5_std
value: 2.545780631940738
- type: nauc_recall_at_1000_diff1
value: 0.2542680835344437
- type: nauc_recall_at_1000_max
value: 49.38194243035277
- type: nauc_recall_at_1000_std
value: 57.021502715846026
- type: nauc_recall_at_100_diff1
value: 5.062154815367015
- type: nauc_recall_at_100_max
value: 45.41178380188437
- type: nauc_recall_at_100_std
value: 50.78382225901813
- type: nauc_recall_at_10_diff1
value: 20.429153629007818
- type: nauc_recall_at_10_max
value: 27.516855026155508
- type: nauc_recall_at_10_std
value: 21.367491371755467
- type: nauc_recall_at_1_diff1
value: 59.90813138005125
- type: nauc_recall_at_1_max
value: 24.58479295693794
- type: nauc_recall_at_1_std
value: -8.056152492777027
- type: nauc_recall_at_20_diff1
value: 13.072430858896942
- type: nauc_recall_at_20_max
value: 29.5522659183247
- type: nauc_recall_at_20_std
value: 28.70569974090291
- type: nauc_recall_at_3_diff1
value: 30.419084482663617
- type: nauc_recall_at_3_max
value: 25.627389580252835
- type: nauc_recall_at_3_std
value: 2.5557690877637054
- type: nauc_recall_at_5_diff1
value: 22.92561435069869
- type: nauc_recall_at_5_max
value: 25.545265063475455
- type: nauc_recall_at_5_std
value: 14.736172663072786
- type: ndcg_at_1
value: 93.999
- type: ndcg_at_10
value: 94.83200000000001
- type: ndcg_at_100
value: 95.363
- type: ndcg_at_1000
value: 95.478
- type: ndcg_at_20
value: 95.077
- type: ndcg_at_3
value: 94.143
- type: ndcg_at_5
value: 94.525
- type: precision_at_1
value: 93.999
- type: precision_at_10
value: 11.029
- type: precision_at_100
value: 1.1560000000000001
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_20
value: 5.62
- type: precision_at_3
value: 35.219
- type: precision_at_5
value: 21.584
- type: recall_at_1
value: 87.339
- type: recall_at_10
value: 97.026
- type: recall_at_100
value: 98.936
- type: recall_at_1000
value: 99.599
- type: recall_at_20
value: 97.744
- type: recall_at_3
value: 95.069
- type: recall_at_5
value: 96.177
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: main_score
value: 60.480000000000004
- type: map_at_1
value: 31.529
- type: map_at_10
value: 52.081
- type: map_at_100
value: 54.342
- type: map_at_1000
value: 54.449000000000005
- type: map_at_20
value: 53.479
- type: map_at_3
value: 45.471000000000004
- type: map_at_5
value: 49.164
- type: mrr_at_1
value: 60.03086419753087
- type: mrr_at_10
value: 67.73754409171075
- type: mrr_at_100
value: 68.332432152368
- type: mrr_at_1000
value: 68.34150941774908
- type: mrr_at_20
value: 68.14780993838725
- type: mrr_at_3
value: 65.6378600823045
- type: mrr_at_5
value: 66.88014403292176
- type: nauc_map_at_1000_diff1
value: 45.36598134579052
- type: nauc_map_at_1000_max
value: 31.891451119906943
- type: nauc_map_at_1000_std
value: -15.41454384137943
- type: nauc_map_at_100_diff1
value: 45.31268291874018
- type: nauc_map_at_100_max
value: 31.811055683002092
- type: nauc_map_at_100_std
value: -15.348503855591417
- type: nauc_map_at_10_diff1
value: 45.22606983565892
- type: nauc_map_at_10_max
value: 30.46108534749699
- type: nauc_map_at_10_std
value: -16.618086029682555
- type: nauc_map_at_1_diff1
value: 49.94952823753276
- type: nauc_map_at_1_max
value: 13.770377574254548
- type: nauc_map_at_1_std
value: -14.946357968858653
- type: nauc_map_at_20_diff1
value: 45.29274207897926
- type: nauc_map_at_20_max
value: 31.27332015148257
- type: nauc_map_at_20_std
value: -15.782946115613129
- type: nauc_map_at_3_diff1
value: 47.94248233566038
- type: nauc_map_at_3_max
value: 24.022838776825456
- type: nauc_map_at_3_std
value: -17.103518542262208
- type: nauc_map_at_5_diff1
value: 45.85345590031722
- type: nauc_map_at_5_max
value: 27.78341379004547
- type: nauc_map_at_5_std
value: -17.490850791756326
- type: nauc_mrr_at_1000_diff1
value: 58.225141047822824
- type: nauc_mrr_at_1000_max
value: 43.39606904140525
- type: nauc_mrr_at_1000_std
value: -14.64093518199122
- type: nauc_mrr_at_100_diff1
value: 58.22137274179545
- type: nauc_mrr_at_100_max
value: 43.39567568136935
- type: nauc_mrr_at_100_std
value: -14.62512313985582
- type: nauc_mrr_at_10_diff1
value: 58.03217329957151
- type: nauc_mrr_at_10_max
value: 43.633561683075186
- type: nauc_mrr_at_10_std
value: -14.563703576023808
- type: nauc_mrr_at_1_diff1
value: 61.48979902647692
- type: nauc_mrr_at_1_max
value: 43.1938079066948
- type: nauc_mrr_at_1_std
value: -15.808138277440465
- type: nauc_mrr_at_20_diff1
value: 58.13185370150794
- type: nauc_mrr_at_20_max
value: 43.35607721183147
- type: nauc_mrr_at_20_std
value: -14.635812702971263
- type: nauc_mrr_at_3_diff1
value: 58.698963168321264
- type: nauc_mrr_at_3_max
value: 43.633129249785405
- type: nauc_mrr_at_3_std
value: -15.733246346983854
- type: nauc_mrr_at_5_diff1
value: 57.94156745229547
- type: nauc_mrr_at_5_max
value: 43.14152462640525
- type: nauc_mrr_at_5_std
value: -15.318685307750895
- type: nauc_ndcg_at_1000_diff1
value: 47.871896043731496
- type: nauc_ndcg_at_1000_max
value: 37.159845167533426
- type: nauc_ndcg_at_1000_std
value: -13.067288160833485
- type: nauc_ndcg_at_100_diff1
value: 47.046171407204426
- type: nauc_ndcg_at_100_max
value: 36.422514360855835
- type: nauc_ndcg_at_100_std
value: -11.636859259571441
- type: nauc_ndcg_at_10_diff1
value: 46.232628149078096
- type: nauc_ndcg_at_10_max
value: 34.82402625088358
- type: nauc_ndcg_at_10_std
value: -14.768545542980114
- type: nauc_ndcg_at_1_diff1
value: 61.48979902647692
- type: nauc_ndcg_at_1_max
value: 43.1938079066948
- type: nauc_ndcg_at_1_std
value: -15.808138277440465
- type: nauc_ndcg_at_20_diff1
value: 46.51116172390955
- type: nauc_ndcg_at_20_max
value: 35.36362650568298
- type: nauc_ndcg_at_20_std
value: -12.849406209182826
- type: nauc_ndcg_at_3_diff1
value: 47.39832263785871
- type: nauc_ndcg_at_3_max
value: 35.67466264628456
- type: nauc_ndcg_at_3_std
value: -17.257717349296943
- type: nauc_ndcg_at_5_diff1
value: 45.91049493804232
- type: nauc_ndcg_at_5_max
value: 33.8405091138445
- type: nauc_ndcg_at_5_std
value: -17.477069902735895
- type: nauc_precision_at_1000_diff1
value: -12.037873000917767
- type: nauc_precision_at_1000_max
value: 26.043220150002295
- type: nauc_precision_at_1000_std
value: 6.84910668321572
- type: nauc_precision_at_100_diff1
value: -9.383403459051864
- type: nauc_precision_at_100_max
value: 29.68713170610003
- type: nauc_precision_at_100_std
value: 10.079531587056152
- type: nauc_precision_at_10_diff1
value: 3.3433323353925135
- type: nauc_precision_at_10_max
value: 38.31790111725993
- type: nauc_precision_at_10_std
value: 0.7888123304710856
- type: nauc_precision_at_1_diff1
value: 61.48979902647692
- type: nauc_precision_at_1_max
value: 43.1938079066948
- type: nauc_precision_at_1_std
value: -15.808138277440465
- type: nauc_precision_at_20_diff1
value: -2.083500986294448
- type: nauc_precision_at_20_max
value: 35.77143835726343
- type: nauc_precision_at_20_std
value: 5.318547021874003
- type: nauc_precision_at_3_diff1
value: 23.335617788912586
- type: nauc_precision_at_3_max
value: 39.81973275320871
- type: nauc_precision_at_3_std
value: -8.442769390555561
- type: nauc_precision_at_5_diff1
value: 11.521087842589482
- type: nauc_precision_at_5_max
value: 39.527792539828255
- type: nauc_precision_at_5_std
value: -5.412729503701626
- type: nauc_recall_at_1000_diff1
value: 10.6830893047453
- type: nauc_recall_at_1000_max
value: 8.834504311238423
- type: nauc_recall_at_1000_std
value: 24.670754304859692
- type: nauc_recall_at_100_diff1
value: 20.646020385527358
- type: nauc_recall_at_100_max
value: 20.121595011523294
- type: nauc_recall_at_100_std
value: 19.42307459311791
- type: nauc_recall_at_10_diff1
value: 33.01029313733417
- type: nauc_recall_at_10_max
value: 27.948634980368702
- type: nauc_recall_at_10_std
value: -10.239767371462975
- type: nauc_recall_at_1_diff1
value: 49.94952823753276
- type: nauc_recall_at_1_max
value: 13.770377574254548
- type: nauc_recall_at_1_std
value: -14.946357968858653
- type: nauc_recall_at_20_diff1
value: 30.040111045267963
- type: nauc_recall_at_20_max
value: 25.984919302418184
- type: nauc_recall_at_20_std
value: -1.4998001817460804
- type: nauc_recall_at_3_diff1
value: 42.24410559113653
- type: nauc_recall_at_3_max
value: 20.269503583626914
- type: nauc_recall_at_3_std
value: -17.09578532600584
- type: nauc_recall_at_5_diff1
value: 36.124149735848945
- type: nauc_recall_at_5_max
value: 22.708022306002622
- type: nauc_recall_at_5_std
value: -16.966976847236193
- type: ndcg_at_1
value: 60.031
- type: ndcg_at_10
value: 60.480000000000004
- type: ndcg_at_100
value: 66.94099999999999
- type: ndcg_at_1000
value: 68.303
- type: ndcg_at_20
value: 63.536
- type: ndcg_at_3
value: 55.903999999999996
- type: ndcg_at_5
value: 57.387
- type: precision_at_1
value: 60.031
- type: precision_at_10
value: 16.682
- type: precision_at_100
value: 2.336
- type: precision_at_1000
value: 0.259
- type: precision_at_20
value: 9.66
- type: precision_at_3
value: 37.191
- type: precision_at_5
value: 27.253
- type: recall_at_1
value: 31.529
- type: recall_at_10
value: 68.035
- type: recall_at_100
value: 90.925
- type: recall_at_1000
value: 98.688
- type: recall_at_20
value: 77.453
- type: recall_at_3
value: 50.221000000000004
- type: recall_at_5
value: 58.209999999999994
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: main_score
value: 76.67399999999999
- type: map_at_1
value: 43.822
- type: map_at_10
value: 68.82000000000001
- type: map_at_100
value: 69.659
- type: map_at_1000
value: 69.714
- type: map_at_20
value: 69.305
- type: map_at_3
value: 65.517
- type: map_at_5
value: 67.633
- type: mrr_at_1
value: 87.643484132343
- type: mrr_at_10
value: 91.28134679485098
- type: mrr_at_100
value: 91.37985230614755
- type: mrr_at_1000
value: 91.38202467630681
- type: mrr_at_20
value: 91.34718855278429
- type: mrr_at_3
value: 90.75849651136599
- type: mrr_at_5
value: 91.10961062345235
- type: nauc_map_at_1000_diff1
value: 3.7670405082837477
- type: nauc_map_at_1000_max
value: 14.410594409695182
- type: nauc_map_at_1000_std
value: 7.94738583292685
- type: nauc_map_at_100_diff1
value: 3.738796209193936
- type: nauc_map_at_100_max
value: 14.408029101534694
- type: nauc_map_at_100_std
value: 7.979641077687816
- type: nauc_map_at_10_diff1
value: 3.334917978089454
- type: nauc_map_at_10_max
value: 13.975255289147748
- type: nauc_map_at_10_std
value: 7.491959628012161
- type: nauc_map_at_1_diff1
value: 75.35066482050009
- type: nauc_map_at_1_max
value: 53.573503488571475
- type: nauc_map_at_1_std
value: -6.542030594426993
- type: nauc_map_at_20_diff1
value: 3.5197129341582083
- type: nauc_map_at_20_max
value: 14.159880698006816
- type: nauc_map_at_20_std
value: 7.856574384998483
- type: nauc_map_at_3_diff1
value: 3.0992333232864064
- type: nauc_map_at_3_max
value: 12.513959281222112
- type: nauc_map_at_3_std
value: 4.352912866014865
- type: nauc_map_at_5_diff1
value: 3.0351688998572537
- type: nauc_map_at_5_max
value: 13.21599457624529
- type: nauc_map_at_5_std
value: 6.246882983214777
- type: nauc_mrr_at_1000_diff1
value: 75.23953736361132
- type: nauc_mrr_at_1000_max
value: 56.64260717262164
- type: nauc_mrr_at_1000_std
value: -4.865932053762276
- type: nauc_mrr_at_100_diff1
value: 75.24091372816497
- type: nauc_mrr_at_100_max
value: 56.64831104504846
- type: nauc_mrr_at_100_std
value: -4.850966297943324
- type: nauc_mrr_at_10_diff1
value: 75.26540178053416
- type: nauc_mrr_at_10_max
value: 56.828755673428965
- type: nauc_mrr_at_10_std
value: -4.8401126970944635
- type: nauc_mrr_at_1_diff1
value: 75.35066482050009
- type: nauc_mrr_at_1_max
value: 53.573503488571475
- type: nauc_mrr_at_1_std
value: -6.542030594426993
- type: nauc_mrr_at_20_diff1
value: 75.24453050729845
- type: nauc_mrr_at_20_max
value: 56.69220588401435
- type: nauc_mrr_at_20_std
value: -4.843700730832108
- type: nauc_mrr_at_3_diff1
value: 74.98411648336175
- type: nauc_mrr_at_3_max
value: 56.766537573537114
- type: nauc_mrr_at_3_std
value: -4.909712671649337
- type: nauc_mrr_at_5_diff1
value: 75.20599020991028
- type: nauc_mrr_at_5_max
value: 56.64236207782237
- type: nauc_mrr_at_5_std
value: -5.208907367513977
- type: nauc_ndcg_at_1000_diff1
value: 11.48307079099774
- type: nauc_ndcg_at_1000_max
value: 20.893326881675176
- type: nauc_ndcg_at_1000_std
value: 10.43489838692119
- type: nauc_ndcg_at_100_diff1
value: 10.395588735754927
- type: nauc_ndcg_at_100_max
value: 20.529573302516912
- type: nauc_ndcg_at_100_std
value: 11.252973083654268
- type: nauc_ndcg_at_10_diff1
value: 8.596739352741972
- type: nauc_ndcg_at_10_max
value: 18.475863682540673
- type: nauc_ndcg_at_10_std
value: 9.175831033463352
- type: nauc_ndcg_at_1_diff1
value: 75.35066482050009
- type: nauc_ndcg_at_1_max
value: 53.573503488571475
- type: nauc_ndcg_at_1_std
value: -6.542030594426993
- type: nauc_ndcg_at_20_diff1
value: 8.998033972471749
- type: nauc_ndcg_at_20_max
value: 18.892085875404522
- type: nauc_ndcg_at_20_std
value: 10.3241608901084
- type: nauc_ndcg_at_3_diff1
value: 8.796384949533579
- type: nauc_ndcg_at_3_max
value: 16.515261419885274
- type: nauc_ndcg_at_3_std
value: 4.081902976576701
- type: nauc_ndcg_at_5_diff1
value: 8.277259464605025
- type: nauc_ndcg_at_5_max
value: 17.163053202909527
- type: nauc_ndcg_at_5_std
value: 6.652669449704474
- type: nauc_precision_at_1000_diff1
value: -3.490556596304827
- type: nauc_precision_at_1000_max
value: 31.0473259001597
- type: nauc_precision_at_1000_std
value: 52.36921397692622
- type: nauc_precision_at_100_diff1
value: -6.420747959222489
- type: nauc_precision_at_100_max
value: 20.555887056005936
- type: nauc_precision_at_100_std
value: 36.119132870798495
- type: nauc_precision_at_10_diff1
value: -6.461726057290426
- type: nauc_precision_at_10_max
value: 12.161081825341915
- type: nauc_precision_at_10_std
value: 17.961318451839993
- type: nauc_precision_at_1_diff1
value: 75.35066482050009
- type: nauc_precision_at_1_max
value: 53.573503488571475
- type: nauc_precision_at_1_std
value: -6.542030594426993
- type: nauc_precision_at_20_diff1
value: -7.361461296416161
- type: nauc_precision_at_20_max
value: 12.663621261696733
- type: nauc_precision_at_20_std
value: 23.312476851670286
- type: nauc_precision_at_3_diff1
value: -3.299056912774522
- type: nauc_precision_at_3_max
value: 9.85602375812038
- type: nauc_precision_at_3_std
value: 6.4962782003155475
- type: nauc_precision_at_5_diff1
value: -5.3155827772027795
- type: nauc_precision_at_5_max
value: 10.32907751171833
- type: nauc_precision_at_5_std
value: 11.384098087196932
- type: nauc_recall_at_1000_diff1
value: -3.4905565963043332
- type: nauc_recall_at_1000_max
value: 31.04732590016041
- type: nauc_recall_at_1000_std
value: 52.36921397692641
- type: nauc_recall_at_100_diff1
value: -6.420747959222586
- type: nauc_recall_at_100_max
value: 20.55588705600596
- type: nauc_recall_at_100_std
value: 36.11913287079825
- type: nauc_recall_at_10_diff1
value: -6.461726057290347
- type: nauc_recall_at_10_max
value: 12.161081825342022
- type: nauc_recall_at_10_std
value: 17.96131845184002
- type: nauc_recall_at_1_diff1
value: 75.35066482050009
- type: nauc_recall_at_1_max
value: 53.573503488571475
- type: nauc_recall_at_1_std
value: -6.542030594426993
- type: nauc_recall_at_20_diff1
value: -7.361461296416054
- type: nauc_recall_at_20_max
value: 12.66362126169679
- type: nauc_recall_at_20_std
value: 23.312476851670382
- type: nauc_recall_at_3_diff1
value: -3.2990569127745886
- type: nauc_recall_at_3_max
value: 9.856023758120296
- type: nauc_recall_at_3_std
value: 6.496278200315444
- type: nauc_recall_at_5_diff1
value: -5.315582777202729
- type: nauc_recall_at_5_max
value: 10.329077511718229
- type: nauc_recall_at_5_std
value: 11.384098087196932
- type: ndcg_at_1
value: 87.643
- type: ndcg_at_10
value: 76.67399999999999
- type: ndcg_at_100
value: 79.462
- type: ndcg_at_1000
value: 80.43599999999999
- type: ndcg_at_20
value: 77.83
- type: ndcg_at_3
value: 72.256
- type: ndcg_at_5
value: 74.789
- type: precision_at_1
value: 87.643
- type: precision_at_10
value: 15.726999999999999
- type: precision_at_100
value: 1.791
- type: precision_at_1000
value: 0.192
- type: precision_at_20
value: 8.236
- type: precision_at_3
value: 45.919
- type: precision_at_5
value: 29.558
- type: recall_at_1
value: 43.822
- type: recall_at_10
value: 78.636
- type: recall_at_100
value: 89.527
- type: recall_at_1000
value: 95.868
- type: recall_at_20
value: 82.363
- type: recall_at_3
value: 68.879
- type: recall_at_5
value: 73.896
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.6608
- type: ap
value: 95.14657820401189
- type: ap_weighted
value: 95.14657820401189
- type: f1
value: 96.66029695623422
- type: f1_weighted
value: 96.66029695623423
- type: main_score
value: 96.6608
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: main_score
value: 45.217
- type: map_at_1
value: 24.728
- type: map_at_10
value: 37.933
- type: map_at_100
value: 39.074999999999996
- type: map_at_1000
value: 39.115
- type: map_at_20
value: 38.663
- type: map_at_3
value: 33.904
- type: map_at_5
value: 36.217
- type: mrr_at_1
value: 25.44412607449857
- type: mrr_at_10
value: 38.52640196479737
- type: mrr_at_100
value: 39.60462889736067
- type: mrr_at_1000
value: 39.638904296248526
- type: mrr_at_20
value: 39.2234365827559
- type: mrr_at_3
value: 34.59646609360076
- type: mrr_at_5
value: 36.8801337153773
- type: nauc_map_at_1000_diff1
value: 37.645652178132174
- type: nauc_map_at_1000_max
value: 9.953357023361367
- type: nauc_map_at_1000_std
value: -20.800238036721503
- type: nauc_map_at_100_diff1
value: 37.643073495974555
- type: nauc_map_at_100_max
value: 9.95921239641703
- type: nauc_map_at_100_std
value: -20.76517765535793
- type: nauc_map_at_10_diff1
value: 37.44380763335014
- type: nauc_map_at_10_max
value: 9.917273043055342
- type: nauc_map_at_10_std
value: -21.467951225710898
- type: nauc_map_at_1_diff1
value: 41.02118887981969
- type: nauc_map_at_1_max
value: 8.301113449711778
- type: nauc_map_at_1_std
value: -19.436814224415027
- type: nauc_map_at_20_diff1
value: 37.58156586490493
- type: nauc_map_at_20_max
value: 9.972927967610659
- type: nauc_map_at_20_std
value: -20.951374218839387
- type: nauc_map_at_3_diff1
value: 37.67246795684178
- type: nauc_map_at_3_max
value: 9.307031378909478
- type: nauc_map_at_3_std
value: -21.77026217965021
- type: nauc_map_at_5_diff1
value: 37.39086482095963
- type: nauc_map_at_5_max
value: 9.732739107368566
- type: nauc_map_at_5_std
value: -21.8424296893692
- type: nauc_mrr_at_1000_diff1
value: 37.36666719603192
- type: nauc_mrr_at_1000_max
value: 9.79040465289953
- type: nauc_mrr_at_1000_std
value: -20.590147245965568
- type: nauc_mrr_at_100_diff1
value: 37.36560296629318
- type: nauc_mrr_at_100_max
value: 9.798113710672162
- type: nauc_mrr_at_100_std
value: -20.556791838504292
- type: nauc_mrr_at_10_diff1
value: 37.19257605840734
- type: nauc_mrr_at_10_max
value: 9.749429811638063
- type: nauc_mrr_at_10_std
value: -21.206407664327276
- type: nauc_mrr_at_1_diff1
value: 40.98478651095172
- type: nauc_mrr_at_1_max
value: 8.173841799119707
- type: nauc_mrr_at_1_std
value: -19.530027987868017
- type: nauc_mrr_at_20_diff1
value: 37.29973172861245
- type: nauc_mrr_at_20_max
value: 9.815127660001345
- type: nauc_mrr_at_20_std
value: -20.700860112175928
- type: nauc_mrr_at_3_diff1
value: 37.282848009425734
- type: nauc_mrr_at_3_max
value: 9.172741713108193
- type: nauc_mrr_at_3_std
value: -21.563630513502996
- type: nauc_mrr_at_5_diff1
value: 37.08609827303586
- type: nauc_mrr_at_5_max
value: 9.604643424273284
- type: nauc_mrr_at_5_std
value: -21.580110806494094
- type: nauc_ndcg_at_1000_diff1
value: 37.086587020218545
- type: nauc_ndcg_at_1000_max
value: 10.696860688467472
- type: nauc_ndcg_at_1000_std
value: -19.50989939916873
- type: nauc_ndcg_at_100_diff1
value: 37.03794531268128
- type: nauc_ndcg_at_100_max
value: 10.940820719182339
- type: nauc_ndcg_at_100_std
value: -18.28651832370893
- type: nauc_ndcg_at_10_diff1
value: 36.21062857920633
- type: nauc_ndcg_at_10_max
value: 10.845172882571733
- type: nauc_ndcg_at_10_std
value: -21.454301679510106
- type: nauc_ndcg_at_1_diff1
value: 40.98478651095172
- type: nauc_ndcg_at_1_max
value: 8.173841799119707
- type: nauc_ndcg_at_1_std
value: -19.530027987868017
- type: nauc_ndcg_at_20_diff1
value: 36.583262733100526
- type: nauc_ndcg_at_20_max
value: 11.10492720898974
- type: nauc_ndcg_at_20_std
value: -19.41753284137609
- type: nauc_ndcg_at_3_diff1
value: 36.57271365035382
- type: nauc_ndcg_at_3_max
value: 9.56073433062999
- type: nauc_ndcg_at_3_std
value: -22.324263670932915
- type: nauc_ndcg_at_5_diff1
value: 36.09419372820154
- type: nauc_ndcg_at_5_max
value: 10.357384992631271
- type: nauc_ndcg_at_5_std
value: -22.389578276324894
- type: nauc_precision_at_1000_diff1
value: -2.7435338714030597
- type: nauc_precision_at_1000_max
value: 4.302274933383809
- type: nauc_precision_at_1000_std
value: 8.456846348638948
- type: nauc_precision_at_100_diff1
value: 15.149466332615983
- type: nauc_precision_at_100_max
value: 12.501013731673163
- type: nauc_precision_at_100_std
value: 15.909667509021785
- type: nauc_precision_at_10_diff1
value: 28.699788688314214
- type: nauc_precision_at_10_max
value: 13.024586051842347
- type: nauc_precision_at_10_std
value: -19.197658937078703
- type: nauc_precision_at_1_diff1
value: 40.98478651095172
- type: nauc_precision_at_1_max
value: 8.173841799119707
- type: nauc_precision_at_1_std
value: -19.530027987868017
- type: nauc_precision_at_20_diff1
value: 26.519292942353395
- type: nauc_precision_at_20_max
value: 14.389979272056438
- type: nauc_precision_at_20_std
value: -7.030956994938155
- type: nauc_precision_at_3_diff1
value: 32.87913492278213
- type: nauc_precision_at_3_max
value: 9.673660161387776
- type: nauc_precision_at_3_std
value: -23.905612656592172
- type: nauc_precision_at_5_diff1
value: 30.903850113238597
- type: nauc_precision_at_5_max
value: 11.482375434154898
- type: nauc_precision_at_5_std
value: -23.828657095254247
- type: nauc_recall_at_1000_diff1
value: 35.80765639589219
- type: nauc_recall_at_1000_max
value: 50.94532805969448
- type: nauc_recall_at_1000_std
value: 66.79910877083275
- type: nauc_recall_at_100_diff1
value: 34.96182828311028
- type: nauc_recall_at_100_max
value: 21.729699631790556
- type: nauc_recall_at_100_std
value: 23.509439011686474
- type: nauc_recall_at_10_diff1
value: 31.88371369567137
- type: nauc_recall_at_10_max
value: 14.425389702697073
- type: nauc_recall_at_10_std
value: -20.95578001880924
- type: nauc_recall_at_1_diff1
value: 41.02118887981969
- type: nauc_recall_at_1_max
value: 8.301113449711778
- type: nauc_recall_at_1_std
value: -19.436814224415027
- type: nauc_recall_at_20_diff1
value: 32.42718780622455
- type: nauc_recall_at_20_max
value: 16.90686126329399
- type: nauc_recall_at_20_std
value: -9.38158227016737
- type: nauc_recall_at_3_diff1
value: 33.68966646043966
- type: nauc_recall_at_3_max
value: 10.336277419708532
- type: nauc_recall_at_3_std
value: -23.80165869168538
- type: nauc_recall_at_5_diff1
value: 32.26258807452426
- type: nauc_recall_at_5_max
value: 12.303713005399935
- type: nauc_recall_at_5_std
value: -23.87721891164968
- type: ndcg_at_1
value: 25.444
- type: ndcg_at_10
value: 45.217
- type: ndcg_at_100
value: 50.575
- type: ndcg_at_1000
value: 51.519999999999996
- type: ndcg_at_20
value: 47.786
- type: ndcg_at_3
value: 37.067
- type: ndcg_at_5
value: 41.184
- type: precision_at_1
value: 25.444
- type: precision_at_10
value: 7.07
- type: precision_at_100
value: 0.9730000000000001
- type: precision_at_1000
value: 0.106
- type: precision_at_20
value: 4.072
- type: precision_at_3
value: 15.754999999999999
- type: precision_at_5
value: 11.544
- type: recall_at_1
value: 24.728
- type: recall_at_10
value: 67.607
- type: recall_at_100
value: 92.094
- type: recall_at_1000
value: 99.165
- type: recall_at_20
value: 77.529
- type: recall_at_3
value: 45.535
- type: recall_at_5
value: 55.394
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.01276789785682
- type: f1
value: 98.9288649250924
- type: f1_weighted
value: 99.01406884928141
- type: main_score
value: 99.01276789785682
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 92.78385772913816
- type: f1
value: 79.78115704297824
- type: f1_weighted
value: 93.90424147486428
- type: main_score
value: 92.78385772913816
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 4672e20407010da34463acc759c162ca9734bca6
metrics:
- type: accuracy
value: 85.83053127101546
- type: f1
value: 82.72036139888232
- type: f1_weighted
value: 85.81759723866098
- type: main_score
value: 85.83053127101546
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
metrics:
- type: accuracy
value: 90.19838601210489
- type: f1
value: 89.55260197964978
- type: f1_weighted
value: 90.11422965504119
- type: main_score
value: 90.19838601210489
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: main_score
value: 46.866746897607094
- type: v_measure
value: 46.866746897607094
- type: v_measure_std
value: 1.0966477896919726
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: main_score
value: 44.6538827415503
- type: v_measure
value: 44.6538827415503
- type: v_measure_std
value: 1.1649569936599116
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 59042f120c80e8afa9cdbb224f67076cec0fc9a7
metrics:
- type: main_score
value: 33.05449204940555
- type: map
value: 33.05449204940555
- type: mrr
value: 34.32562058439585
- type: nAUC_map_diff1
value: 11.465656013162807
- type: nAUC_map_max
value: -20.400088169502308
- type: nAUC_map_std
value: -2.638964886362445
- type: nAUC_mrr_diff1
value: 10.644290702481207
- type: nAUC_mrr_max
value: -15.304687384645769
- type: nAUC_mrr_std
value: -0.519919931348978
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: main_score
value: 41.998000000000005
- type: map_at_1
value: 6.907000000000001
- type: map_at_10
value: 16.397000000000002
- type: map_at_100
value: 21.69
- type: map_at_1000
value: 23.652
- type: map_at_20
value: 18.629
- type: map_at_3
value: 11.969000000000001
- type: map_at_5
value: 13.894
- type: mrr_at_1
value: 53.25077399380805
- type: mrr_at_10
value: 61.8561108653988
- type: mrr_at_100
value: 62.42447851935404
- type: mrr_at_1000
value: 62.459626424428095
- type: mrr_at_20
value: 62.287236389990696
- type: mrr_at_3
value: 60.42311661506711
- type: mrr_at_5
value: 61.36738906088753
- type: nauc_map_at_1000_diff1
value: 17.159461939643844
- type: nauc_map_at_1000_max
value: 32.42764938789903
- type: nauc_map_at_1000_std
value: 11.039427848422093
- type: nauc_map_at_100_diff1
value: 19.089532984187503
- type: nauc_map_at_100_max
value: 31.96721085058713
- type: nauc_map_at_100_std
value: 6.947468655726444
- type: nauc_map_at_10_diff1
value: 25.77255342629802
- type: nauc_map_at_10_max
value: 26.163590320961543
- type: nauc_map_at_10_std
value: -5.2588093720998375
- type: nauc_map_at_1_diff1
value: 46.31602607957798
- type: nauc_map_at_1_max
value: 11.807757660801942
- type: nauc_map_at_1_std
value: -13.984889089354317
- type: nauc_map_at_20_diff1
value: 22.308161130465365
- type: nauc_map_at_20_max
value: 29.070587307827722
- type: nauc_map_at_20_std
value: -1.0103056620851558
- type: nauc_map_at_3_diff1
value: 33.580827849617506
- type: nauc_map_at_3_max
value: 17.661630885799042
- type: nauc_map_at_3_std
value: -11.463282544041888
- type: nauc_map_at_5_diff1
value: 30.32603342696912
- type: nauc_map_at_5_max
value: 20.938905485667245
- type: nauc_map_at_5_std
value: -10.537086968155755
- type: nauc_mrr_at_1000_diff1
value: 24.45065397805829
- type: nauc_mrr_at_1000_max
value: 48.17519860927417
- type: nauc_mrr_at_1000_std
value: 30.350767549118903
- type: nauc_mrr_at_100_diff1
value: 24.444061606534486
- type: nauc_mrr_at_100_max
value: 48.1922894212229
- type: nauc_mrr_at_100_std
value: 30.379257816584094
- type: nauc_mrr_at_10_diff1
value: 24.25598717198779
- type: nauc_mrr_at_10_max
value: 48.10437607774264
- type: nauc_mrr_at_10_std
value: 30.090202482685996
- type: nauc_mrr_at_1_diff1
value: 26.907595285201264
- type: nauc_mrr_at_1_max
value: 44.006974050369955
- type: nauc_mrr_at_1_std
value: 26.921001962861062
- type: nauc_mrr_at_20_diff1
value: 24.462771570553738
- type: nauc_mrr_at_20_max
value: 48.264688196799746
- type: nauc_mrr_at_20_std
value: 30.498095141265914
- type: nauc_mrr_at_3_diff1
value: 24.76829388237229
- type: nauc_mrr_at_3_max
value: 48.213758704739924
- type: nauc_mrr_at_3_std
value: 30.1502853918892
- type: nauc_mrr_at_5_diff1
value: 24.476494932330247
- type: nauc_mrr_at_5_max
value: 47.977250552198804
- type: nauc_mrr_at_5_std
value: 29.65248143104835
- type: nauc_ndcg_at_1000_diff1
value: 13.055818920426246
- type: nauc_ndcg_at_1000_max
value: 46.00986444256306
- type: nauc_ndcg_at_1000_std
value: 29.622662054922085
- type: nauc_ndcg_at_100_diff1
value: 12.260551238228816
- type: nauc_ndcg_at_100_max
value: 39.89783048267698
- type: nauc_ndcg_at_100_std
value: 23.806961617956613
- type: nauc_ndcg_at_10_diff1
value: 11.002915931619567
- type: nauc_ndcg_at_10_max
value: 39.79323759244374
- type: nauc_ndcg_at_10_std
value: 23.053072152911046
- type: nauc_ndcg_at_1_diff1
value: 27.560910719974434
- type: nauc_ndcg_at_1_max
value: 41.21084046258119
- type: nauc_ndcg_at_1_std
value: 26.112891742912893
- type: nauc_ndcg_at_20_diff1
value: 10.085854089024496
- type: nauc_ndcg_at_20_max
value: 37.88629173784684
- type: nauc_ndcg_at_20_std
value: 23.17664322248358
- type: nauc_ndcg_at_3_diff1
value: 16.58969583405987
- type: nauc_ndcg_at_3_max
value: 41.282222954101435
- type: nauc_ndcg_at_3_std
value: 21.080670648392747
- type: nauc_ndcg_at_5_diff1
value: 13.893127947909885
- type: nauc_ndcg_at_5_max
value: 40.21188015992804
- type: nauc_ndcg_at_5_std
value: 21.417443978842652
- type: nauc_precision_at_1000_diff1
value: -17.227504530334564
- type: nauc_precision_at_1000_max
value: 3.798554468439066
- type: nauc_precision_at_1000_std
value: 35.73617809452683
- type: nauc_precision_at_100_diff1
value: -17.63388230218776
- type: nauc_precision_at_100_max
value: 15.079399882407094
- type: nauc_precision_at_100_std
value: 41.83698491321226
- type: nauc_precision_at_10_diff1
value: -11.850925959645156
- type: nauc_precision_at_10_max
value: 35.93283968364352
- type: nauc_precision_at_10_std
value: 34.391271855921296
- type: nauc_precision_at_1_diff1
value: 27.730860778824823
- type: nauc_precision_at_1_max
value: 43.97462471516834
- type: nauc_precision_at_1_std
value: 27.491068270978896
- type: nauc_precision_at_20_diff1
value: -14.281328840943347
- type: nauc_precision_at_20_max
value: 29.469099781759006
- type: nauc_precision_at_20_std
value: 38.54703022340941
- type: nauc_precision_at_3_diff1
value: 3.486986910413196
- type: nauc_precision_at_3_max
value: 41.21107780473768
- type: nauc_precision_at_3_std
value: 24.057479124531216
- type: nauc_precision_at_5_diff1
value: -3.0623787872866233
- type: nauc_precision_at_5_max
value: 37.49266386466702
- type: nauc_precision_at_5_std
value: 26.894454268004935
- type: nauc_recall_at_1000_diff1
value: -2.446891864334283
- type: nauc_recall_at_1000_max
value: 23.867293584643377
- type: nauc_recall_at_1000_std
value: 16.34707128224595
- type: nauc_recall_at_100_diff1
value: 4.891133690841179
- type: nauc_recall_at_100_max
value: 24.56727964996522
- type: nauc_recall_at_100_std
value: 9.847212953200797
- type: nauc_recall_at_10_diff1
value: 19.211912363585288
- type: nauc_recall_at_10_max
value: 24.825344777920737
- type: nauc_recall_at_10_std
value: -5.447989195041898
- type: nauc_recall_at_1_diff1
value: 46.31602607957798
- type: nauc_recall_at_1_max
value: 11.807757660801942
- type: nauc_recall_at_1_std
value: -13.984889089354317
- type: nauc_recall_at_20_diff1
value: 12.233372054304805
- type: nauc_recall_at_20_max
value: 22.284108685207148
- type: nauc_recall_at_20_std
value: -4.317138366746209
- type: nauc_recall_at_3_diff1
value: 28.394631527225815
- type: nauc_recall_at_3_max
value: 15.593864852625462
- type: nauc_recall_at_3_std
value: -12.383531804314593
- type: nauc_recall_at_5_diff1
value: 24.457441304950343
- type: nauc_recall_at_5_max
value: 19.080049396281623
- type: nauc_recall_at_5_std
value: -11.879747703626627
- type: ndcg_at_1
value: 51.548
- type: ndcg_at_10
value: 41.998000000000005
- type: ndcg_at_100
value: 39.626
- type: ndcg_at_1000
value: 48.707
- type: ndcg_at_20
value: 40.181
- type: ndcg_at_3
value: 48.06
- type: ndcg_at_5
value: 45.829
- type: precision_at_1
value: 52.941
- type: precision_at_10
value: 31.330999999999996
- type: precision_at_100
value: 10.421
- type: precision_at_1000
value: 2.428
- type: precision_at_20
value: 24.118000000000002
- type: precision_at_3
value: 45.408
- type: precision_at_5
value: 39.938
- type: recall_at_1
value: 6.907000000000001
- type: recall_at_10
value: 20.51
- type: recall_at_100
value: 40.857
- type: recall_at_1000
value: 73.616
- type: recall_at_20
value: 26.52
- type: recall_at_3
value: 13.267999999999999
- type: recall_at_5
value: 16.141
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: main_score
value: 71.8
- type: map_at_1
value: 47.629
- type: map_at_10
value: 64.846
- type: map_at_100
value: 65.40899999999999
- type: map_at_1000
value: 65.416
- type: map_at_20
value: 65.239
- type: map_at_3
value: 61.185
- type: map_at_5
value: 63.583
- type: mrr_at_1
value: 53.15758980301275
- type: mrr_at_10
value: 67.12880961577366
- type: mrr_at_100
value: 67.44006405426018
- type: mrr_at_1000
value: 67.44519150402294
- type: mrr_at_20
value: 67.34317135515428
- type: mrr_at_3
value: 64.5905755117805
- type: mrr_at_5
value: 66.24613750482806
- type: nauc_map_at_1000_diff1
value: 45.73812106517133
- type: nauc_map_at_1000_max
value: 35.21262031755756
- type: nauc_map_at_1000_std
value: -5.549443574026027
- type: nauc_map_at_100_diff1
value: 45.74254652176879
- type: nauc_map_at_100_max
value: 35.22349167515518
- type: nauc_map_at_100_std
value: -5.53697496044773
- type: nauc_map_at_10_diff1
value: 45.62837128377087
- type: nauc_map_at_10_max
value: 35.3261562342222
- type: nauc_map_at_10_std
value: -5.761924414031163
- type: nauc_map_at_1_diff1
value: 48.69187848570499
- type: nauc_map_at_1_max
value: 28.687996096473476
- type: nauc_map_at_1_std
value: -7.518605958272523
- type: nauc_map_at_20_diff1
value: 45.702303442220035
- type: nauc_map_at_20_max
value: 35.30719944705456
- type: nauc_map_at_20_std
value: -5.59505654742681
- type: nauc_map_at_3_diff1
value: 45.376813726832474
- type: nauc_map_at_3_max
value: 34.68452149643597
- type: nauc_map_at_3_std
value: -7.329014950379634
- type: nauc_map_at_5_diff1
value: 45.29528861989316
- type: nauc_map_at_5_max
value: 35.35741440869229
- type: nauc_map_at_5_std
value: -6.028788612259288
- type: nauc_mrr_at_1000_diff1
value: 46.11808147912517
- type: nauc_mrr_at_1000_max
value: 35.59241850411947
- type: nauc_mrr_at_1000_std
value: -3.4072428526109317
- type: nauc_mrr_at_100_diff1
value: 46.121345545514046
- type: nauc_mrr_at_100_max
value: 35.60147795073431
- type: nauc_mrr_at_100_std
value: -3.3965322447588826
- type: nauc_mrr_at_10_diff1
value: 46.0920068210502
- type: nauc_mrr_at_10_max
value: 35.79649987854354
- type: nauc_mrr_at_10_std
value: -3.339624589368137
- type: nauc_mrr_at_1_diff1
value: 49.101364605656194
- type: nauc_mrr_at_1_max
value: 31.500796071482146
- type: nauc_mrr_at_1_std
value: -4.183818500718156
- type: nauc_mrr_at_20_diff1
value: 46.088076630465594
- type: nauc_mrr_at_20_max
value: 35.682131663053205
- type: nauc_mrr_at_20_std
value: -3.35939023178519
- type: nauc_mrr_at_3_diff1
value: 45.47570812708642
- type: nauc_mrr_at_3_max
value: 35.741892517632984
- type: nauc_mrr_at_3_std
value: -4.135335963822013
- type: nauc_mrr_at_5_diff1
value: 45.78903474184014
- type: nauc_mrr_at_5_max
value: 35.91273593700205
- type: nauc_mrr_at_5_std
value: -3.467873421286869
- type: nauc_ndcg_at_1000_diff1
value: 45.5056583000012
- type: nauc_ndcg_at_1000_max
value: 36.34328379251593
- type: nauc_ndcg_at_1000_std
value: -4.0759698229323345
- type: nauc_ndcg_at_100_diff1
value: 45.61918946477166
- type: nauc_ndcg_at_100_max
value: 36.675460335836235
- type: nauc_ndcg_at_100_std
value: -3.6795334726235986
- type: nauc_ndcg_at_10_diff1
value: 45.15343994274541
- type: nauc_ndcg_at_10_max
value: 37.48139242964657
- type: nauc_ndcg_at_10_std
value: -4.287039084554882
- type: nauc_ndcg_at_1_diff1
value: 49.101364605656194
- type: nauc_ndcg_at_1_max
value: 31.500796071482146
- type: nauc_ndcg_at_1_std
value: -4.183818500718156
- type: nauc_ndcg_at_20_diff1
value: 45.310026313402375
- type: nauc_ndcg_at_20_max
value: 37.32177497902133
- type: nauc_ndcg_at_20_std
value: -3.8214360391282587
- type: nauc_ndcg_at_3_diff1
value: 44.27064370528994
- type: nauc_ndcg_at_3_max
value: 36.380294033571396
- type: nauc_ndcg_at_3_std
value: -6.844263370898355
- type: nauc_ndcg_at_5_diff1
value: 44.29933499225583
- type: nauc_ndcg_at_5_max
value: 37.46477041822136
- type: nauc_ndcg_at_5_std
value: -4.866548530467956
- type: nauc_precision_at_1000_diff1
value: -14.666553359142306
- type: nauc_precision_at_1000_max
value: -0.5599759853201481
- type: nauc_precision_at_1000_std
value: 16.8370925526591
- type: nauc_precision_at_100_diff1
value: -11.816251306246278
- type: nauc_precision_at_100_max
value: 2.969819268208207
- type: nauc_precision_at_100_std
value: 18.59422946634747
- type: nauc_precision_at_10_diff1
value: 1.2050200086029401
- type: nauc_precision_at_10_max
value: 17.59930352911209
- type: nauc_precision_at_10_std
value: 13.714495717588985
- type: nauc_precision_at_1_diff1
value: 49.101364605656194
- type: nauc_precision_at_1_max
value: 31.500796071482146
- type: nauc_precision_at_1_std
value: -4.183818500718156
- type: nauc_precision_at_20_diff1
value: -5.263476664822757
- type: nauc_precision_at_20_max
value: 11.42004823600046
- type: nauc_precision_at_20_std
value: 16.510514518664994
- type: nauc_precision_at_3_diff1
value: 20.116460379305828
- type: nauc_precision_at_3_max
value: 31.32235038301311
- type: nauc_precision_at_3_std
value: 2.7486717133871923
- type: nauc_precision_at_5_diff1
value: 9.57451645335723
- type: nauc_precision_at_5_max
value: 25.28449126580587
- type: nauc_precision_at_5_std
value: 9.955736162466767
- type: nauc_recall_at_1000_diff1
value: -21.632253065978794
- type: nauc_recall_at_1000_max
value: 70.14409090958776
- type: nauc_recall_at_1000_std
value: 65.61658090892989
- type: nauc_recall_at_100_diff1
value: 51.83161124806711
- type: nauc_recall_at_100_max
value: 77.49921361841523
- type: nauc_recall_at_100_std
value: 48.352508746719444
- type: nauc_recall_at_10_diff1
value: 39.86695231362791
- type: nauc_recall_at_10_max
value: 50.12029094799474
- type: nauc_recall_at_10_std
value: 0.1650940628131058
- type: nauc_recall_at_1_diff1
value: 48.69187848570499
- type: nauc_recall_at_1_max
value: 28.687996096473476
- type: nauc_recall_at_1_std
value: -7.518605958272523
- type: nauc_recall_at_20_diff1
value: 39.14155398061627
- type: nauc_recall_at_20_max
value: 56.78559423716229
- type: nauc_recall_at_20_std
value: 7.9728224572344075
- type: nauc_recall_at_3_diff1
value: 38.69589523432158
- type: nauc_recall_at_3_max
value: 39.53271258375579
- type: nauc_recall_at_3_std
value: -8.646925065787512
- type: nauc_recall_at_5_diff1
value: 37.45922652959002
- type: nauc_recall_at_5_max
value: 44.4911958995867
- type: nauc_recall_at_5_std
value: -3.5659842556375594
- type: ndcg_at_1
value: 53.15800000000001
- type: ndcg_at_10
value: 71.8
- type: ndcg_at_100
value: 73.85199999999999
- type: ndcg_at_1000
value: 74.017
- type: ndcg_at_20
value: 72.933
- type: ndcg_at_3
value: 65.479
- type: ndcg_at_5
value: 69.182
- type: precision_at_1
value: 53.15800000000001
- type: precision_at_10
value: 10.805
- type: precision_at_100
value: 1.2
- type: precision_at_1000
value: 0.122
- type: precision_at_20
value: 5.694
- type: precision_at_3
value: 28.939999999999998
- type: precision_at_5
value: 19.641000000000002
- type: recall_at_1
value: 47.629
- type: recall_at_10
value: 90.204
- type: recall_at_100
value: 98.66
- type: recall_at_1000
value: 99.874
- type: recall_at_20
value: 94.24
- type: recall_at_3
value: 74.394
- type: recall_at_5
value: 82.711
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: main_score
value: 90.025
- type: map_at_1
value: 72.222
- type: map_at_10
value: 86.58500000000001
- type: map_at_100
value: 87.176
- type: map_at_1000
value: 87.188
- type: map_at_20
value: 86.97399999999999
- type: map_at_3
value: 83.736
- type: map_at_5
value: 85.554
- type: mrr_at_1
value: 83.04
- type: mrr_at_10
value: 89.05599603174585
- type: mrr_at_100
value: 89.12398891419457
- type: mrr_at_1000
value: 89.12434072241001
- type: mrr_at_20
value: 89.10416280692111
- type: mrr_at_3
value: 88.23833333333312
- type: mrr_at_5
value: 88.82233333333308
- type: nauc_map_at_1000_diff1
value: 78.29348113313218
- type: nauc_map_at_1000_max
value: 32.31386754277228
- type: nauc_map_at_1000_std
value: -50.47543661484052
- type: nauc_map_at_100_diff1
value: 78.29618548618575
- type: nauc_map_at_100_max
value: 32.301475680947846
- type: nauc_map_at_100_std
value: -50.50303428814228
- type: nauc_map_at_10_diff1
value: 78.47383776440803
- type: nauc_map_at_10_max
value: 31.839339990133563
- type: nauc_map_at_10_std
value: -52.832713555976
- type: nauc_map_at_1_diff1
value: 82.46330147467418
- type: nauc_map_at_1_max
value: 23.497664918373538
- type: nauc_map_at_1_std
value: -43.824657665520704
- type: nauc_map_at_20_diff1
value: 78.34772176474422
- type: nauc_map_at_20_max
value: 32.16495182893947
- type: nauc_map_at_20_std
value: -51.503292726558605
- type: nauc_map_at_3_diff1
value: 79.07823813069432
- type: nauc_map_at_3_max
value: 29.395911687513976
- type: nauc_map_at_3_std
value: -54.16377546873304
- type: nauc_map_at_5_diff1
value: 78.73076619520454
- type: nauc_map_at_5_max
value: 30.700453118585237
- type: nauc_map_at_5_std
value: -54.130514177664054
- type: nauc_mrr_at_1000_diff1
value: 79.04736184471865
- type: nauc_mrr_at_1000_max
value: 34.43004593837643
- type: nauc_mrr_at_1000_std
value: -46.137269068195316
- type: nauc_mrr_at_100_diff1
value: 79.04698704288086
- type: nauc_mrr_at_100_max
value: 34.4305553741175
- type: nauc_mrr_at_100_std
value: -46.13786687786434
- type: nauc_mrr_at_10_diff1
value: 79.04490677485934
- type: nauc_mrr_at_10_max
value: 34.38170181522227
- type: nauc_mrr_at_10_std
value: -46.38129875681807
- type: nauc_mrr_at_1_diff1
value: 79.87159215719124
- type: nauc_mrr_at_1_max
value: 34.05882339253136
- type: nauc_mrr_at_1_std
value: -43.56093395137571
- type: nauc_mrr_at_20_diff1
value: 79.04384174535653
- type: nauc_mrr_at_20_max
value: 34.442136494675005
- type: nauc_mrr_at_20_std
value: -46.205458519638654
- type: nauc_mrr_at_3_diff1
value: 78.78154519155487
- type: nauc_mrr_at_3_max
value: 34.74995000500305
- type: nauc_mrr_at_3_std
value: -46.36264203155416
- type: nauc_mrr_at_5_diff1
value: 79.02631187177
- type: nauc_mrr_at_5_max
value: 34.538698249632205
- type: nauc_mrr_at_5_std
value: -46.468881576157465
- type: nauc_ndcg_at_1000_diff1
value: 78.25260097014645
- type: nauc_ndcg_at_1000_max
value: 33.68584498704271
- type: nauc_ndcg_at_1000_std
value: -48.44716779494868
- type: nauc_ndcg_at_100_diff1
value: 78.25115412256716
- type: nauc_ndcg_at_100_max
value: 33.63652663447088
- type: nauc_ndcg_at_100_std
value: -48.489243909024715
- type: nauc_ndcg_at_10_diff1
value: 78.23875101557334
- type: nauc_ndcg_at_10_max
value: 32.65217430043823
- type: nauc_ndcg_at_10_std
value: -52.57770468845309
- type: nauc_ndcg_at_1_diff1
value: 79.87159215719124
- type: nauc_ndcg_at_1_max
value: 34.05882339253136
- type: nauc_ndcg_at_1_std
value: -43.56093395137571
- type: nauc_ndcg_at_20_diff1
value: 78.23478552311765
- type: nauc_ndcg_at_20_max
value: 33.30691737901109
- type: nauc_ndcg_at_20_std
value: -50.78412614854527
- type: nauc_ndcg_at_3_diff1
value: 77.66134485470224
- type: nauc_ndcg_at_3_max
value: 32.19504710373125
- type: nauc_ndcg_at_3_std
value: -52.01636728550155
- type: nauc_ndcg_at_5_diff1
value: 78.04734137324255
- type: nauc_ndcg_at_5_max
value: 31.94593625591248
- type: nauc_ndcg_at_5_std
value: -53.02169800690546
- type: nauc_precision_at_1000_diff1
value: -45.771948123542636
- type: nauc_precision_at_1000_max
value: -5.182406190477681
- type: nauc_precision_at_1000_std
value: 41.14460438707817
- type: nauc_precision_at_100_diff1
value: -45.64767154261461
- type: nauc_precision_at_100_max
value: -5.046308286851713
- type: nauc_precision_at_100_std
value: 41.07186716587844
- type: nauc_precision_at_10_diff1
value: -42.26779562305825
- type: nauc_precision_at_10_max
value: -1.1264852893323076
- type: nauc_precision_at_10_std
value: 27.62275729822392
- type: nauc_precision_at_1_diff1
value: 79.87159215719124
- type: nauc_precision_at_1_max
value: 34.05882339253136
- type: nauc_precision_at_1_std
value: -43.56093395137571
- type: nauc_precision_at_20_diff1
value: -44.24293221128388
- type: nauc_precision_at_20_max
value: -3.1345628837361867
- type: nauc_precision_at_20_std
value: 34.23625492740366
- type: nauc_precision_at_3_diff1
value: -24.925251389823348
- type: nauc_precision_at_3_max
value: 6.622188833369412
- type: nauc_precision_at_3_std
value: 6.424741786858512
- type: nauc_precision_at_5_diff1
value: -36.1407949990387
- type: nauc_precision_at_5_max
value: 1.7533948968374462
- type: nauc_precision_at_5_std
value: 17.914083278982634
- type: nauc_recall_at_1000_diff1
value: 52.26815466244496
- type: nauc_recall_at_1000_max
value: 69.73611104239443
- type: nauc_recall_at_1000_std
value: 73.18969965863008
- type: nauc_recall_at_100_diff1
value: 70.80557513785271
- type: nauc_recall_at_100_max
value: 33.333440086544556
- type: nauc_recall_at_100_std
value: -38.75992366905504
- type: nauc_recall_at_10_diff1
value: 74.45948457438163
- type: nauc_recall_at_10_max
value: 26.64948512428989
- type: nauc_recall_at_10_std
value: -82.90334292052363
- type: nauc_recall_at_1_diff1
value: 82.46330147467418
- type: nauc_recall_at_1_max
value: 23.497664918373538
- type: nauc_recall_at_1_std
value: -43.824657665520704
- type: nauc_recall_at_20_diff1
value: 73.80140280887753
- type: nauc_recall_at_20_max
value: 30.361616426734965
- type: nauc_recall_at_20_std
value: -81.1418804447414
- type: nauc_recall_at_3_diff1
value: 75.19854736087834
- type: nauc_recall_at_3_max
value: 26.12298005045584
- type: nauc_recall_at_3_std
value: -63.42583714745169
- type: nauc_recall_at_5_diff1
value: 74.16423451950358
- type: nauc_recall_at_5_max
value: 25.552390331018987
- type: nauc_recall_at_5_std
value: -71.15891947773912
- type: ndcg_at_1
value: 83.04
- type: ndcg_at_10
value: 90.025
- type: ndcg_at_100
value: 91.006
- type: ndcg_at_1000
value: 91.061
- type: ndcg_at_20
value: 90.556
- type: ndcg_at_3
value: 87.493
- type: ndcg_at_5
value: 88.955
- type: precision_at_1
value: 83.04
- type: precision_at_10
value: 13.667000000000002
- type: precision_at_100
value: 1.542
- type: precision_at_1000
value: 0.157
- type: precision_at_20
value: 7.221
- type: precision_at_3
value: 38.433
- type: precision_at_5
value: 25.228
- type: recall_at_1
value: 72.222
- type: recall_at_10
value: 96.604
- type: recall_at_100
value: 99.786
- type: recall_at_1000
value: 99.996
- type: recall_at_20
value: 98.253
- type: recall_at_3
value: 89.276
- type: recall_at_5
value: 93.46
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: main_score
value: 72.86492101891123
- type: v_measure
value: 72.86492101891123
- type: v_measure_std
value: 2.778711445144635
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: main_score
value: 75.27316726548479
- type: v_measure
value: 75.27316726548479
- type: v_measure_std
value: 8.87871936725338
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: main_score
value: 26.638
- type: map_at_1
value: 6.128
- type: map_at_10
value: 16.472
- type: map_at_100
value: 19.522000000000002
- type: map_at_1000
value: 19.898
- type: map_at_20
value: 18.098
- type: map_at_3
value: 11.283
- type: map_at_5
value: 13.771
- type: mrr_at_1
value: 30.2
- type: mrr_at_10
value: 42.621150793650735
- type: mrr_at_100
value: 43.740858712021954
- type: mrr_at_1000
value: 43.762699500220904
- type: mrr_at_20
value: 43.383639927753634
- type: mrr_at_3
value: 38.83333333333331
- type: mrr_at_5
value: 41.14833333333326
- type: nauc_map_at_1000_diff1
value: 13.13534664124808
- type: nauc_map_at_1000_max
value: 29.346654566149795
- type: nauc_map_at_1000_std
value: 18.08121186982413
- type: nauc_map_at_100_diff1
value: 13.098072728041538
- type: nauc_map_at_100_max
value: 29.299084480697523
- type: nauc_map_at_100_std
value: 17.961620202918464
- type: nauc_map_at_10_diff1
value: 14.001743720394682
- type: nauc_map_at_10_max
value: 28.04128290996403
- type: nauc_map_at_10_std
value: 13.744481555974716
- type: nauc_map_at_1_diff1
value: 22.1926640424872
- type: nauc_map_at_1_max
value: 21.32609279586034
- type: nauc_map_at_1_std
value: 6.566596302915438
- type: nauc_map_at_20_diff1
value: 13.57313142419664
- type: nauc_map_at_20_max
value: 28.93840146319476
- type: nauc_map_at_20_std
value: 16.50869367365676
- type: nauc_map_at_3_diff1
value: 17.707700541948462
- type: nauc_map_at_3_max
value: 26.058174051376238
- type: nauc_map_at_3_std
value: 9.943924560735267
- type: nauc_map_at_5_diff1
value: 17.11844492157723
- type: nauc_map_at_5_max
value: 27.865247403049388
- type: nauc_map_at_5_std
value: 11.372588172121546
- type: nauc_mrr_at_1000_diff1
value: 21.11248719936198
- type: nauc_mrr_at_1000_max
value: 26.734172102201466
- type: nauc_mrr_at_1000_std
value: 11.766121765437228
- type: nauc_mrr_at_100_diff1
value: 21.107109982277702
- type: nauc_mrr_at_100_max
value: 26.741616065723267
- type: nauc_mrr_at_100_std
value: 11.789802686224208
- type: nauc_mrr_at_10_diff1
value: 20.74108639793207
- type: nauc_mrr_at_10_max
value: 26.920838463358333
- type: nauc_mrr_at_10_std
value: 11.849217361926522
- type: nauc_mrr_at_1_diff1
value: 22.177437860573356
- type: nauc_mrr_at_1_max
value: 21.88074521417754
- type: nauc_mrr_at_1_std
value: 6.776011900101789
- type: nauc_mrr_at_20_diff1
value: 21.126633710175994
- type: nauc_mrr_at_20_max
value: 26.860736480370974
- type: nauc_mrr_at_20_std
value: 11.815411633726338
- type: nauc_mrr_at_3_diff1
value: 21.689245200066466
- type: nauc_mrr_at_3_max
value: 26.187305092831625
- type: nauc_mrr_at_3_std
value: 10.895380313134332
- type: nauc_mrr_at_5_diff1
value: 20.898811082479778
- type: nauc_mrr_at_5_max
value: 26.939217247104036
- type: nauc_mrr_at_5_std
value: 11.77832949822472
- type: nauc_ndcg_at_1000_diff1
value: 13.251184947898546
- type: nauc_ndcg_at_1000_max
value: 30.879594164526146
- type: nauc_ndcg_at_1000_std
value: 23.125206047366625
- type: nauc_ndcg_at_100_diff1
value: 12.549100649053676
- type: nauc_ndcg_at_100_max
value: 30.634680845419123
- type: nauc_ndcg_at_100_std
value: 23.296226055422984
- type: nauc_ndcg_at_10_diff1
value: 14.475144549294322
- type: nauc_ndcg_at_10_max
value: 29.450349815417336
- type: nauc_ndcg_at_10_std
value: 15.94068314781612
- type: nauc_ndcg_at_1_diff1
value: 22.177437860573356
- type: nauc_ndcg_at_1_max
value: 21.88074521417754
- type: nauc_ndcg_at_1_std
value: 6.776011900101789
- type: nauc_ndcg_at_20_diff1
value: 14.173669585802266
- type: nauc_ndcg_at_20_max
value: 30.475890854725
- type: nauc_ndcg_at_20_std
value: 19.863898148221704
- type: nauc_ndcg_at_3_diff1
value: 18.93971261196868
- type: nauc_ndcg_at_3_max
value: 27.3707298720736
- type: nauc_ndcg_at_3_std
value: 11.439810510051224
- type: nauc_ndcg_at_5_diff1
value: 17.89535958094687
- type: nauc_ndcg_at_5_max
value: 29.272740466638425
- type: nauc_ndcg_at_5_std
value: 13.402467626635909
- type: nauc_precision_at_1000_diff1
value: -3.811547048784123
- type: nauc_precision_at_1000_max
value: 22.55165337197117
- type: nauc_precision_at_1000_std
value: 35.98524999650108
- type: nauc_precision_at_100_diff1
value: 0.6474234774922896
- type: nauc_precision_at_100_max
value: 25.06920726527032
- type: nauc_precision_at_100_std
value: 32.31439698982313
- type: nauc_precision_at_10_diff1
value: 7.943127218139508
- type: nauc_precision_at_10_max
value: 28.571937636787197
- type: nauc_precision_at_10_std
value: 18.8472620918488
- type: nauc_precision_at_1_diff1
value: 22.177437860573356
- type: nauc_precision_at_1_max
value: 21.88074521417754
- type: nauc_precision_at_1_std
value: 6.776011900101789
- type: nauc_precision_at_20_diff1
value: 6.981574259607366
- type: nauc_precision_at_20_max
value: 28.986094397038727
- type: nauc_precision_at_20_std
value: 25.83129974001146
- type: nauc_precision_at_3_diff1
value: 17.197490724039355
- type: nauc_precision_at_3_max
value: 29.17569320583099
- type: nauc_precision_at_3_std
value: 13.430554945991846
- type: nauc_precision_at_5_diff1
value: 14.952364330739362
- type: nauc_precision_at_5_max
value: 31.053243354846977
- type: nauc_precision_at_5_std
value: 15.856312752807822
- type: nauc_recall_at_1000_diff1
value: -4.8224253128926975
- type: nauc_recall_at_1000_max
value: 21.3989024429911
- type: nauc_recall_at_1000_std
value: 39.152234275603604
- type: nauc_recall_at_100_diff1
value: 0.11936808422867201
- type: nauc_recall_at_100_max
value: 24.261739241957823
- type: nauc_recall_at_100_std
value: 32.62984573938928
- type: nauc_recall_at_10_diff1
value: 7.851256165018388
- type: nauc_recall_at_10_max
value: 27.936406600938746
- type: nauc_recall_at_10_std
value: 18.683634320636113
- type: nauc_recall_at_1_diff1
value: 22.1926640424872
- type: nauc_recall_at_1_max
value: 21.32609279586034
- type: nauc_recall_at_1_std
value: 6.566596302915438
- type: nauc_recall_at_20_diff1
value: 6.8107211705182165
- type: nauc_recall_at_20_max
value: 28.286284094687787
- type: nauc_recall_at_20_std
value: 25.932013268120862
- type: nauc_recall_at_3_diff1
value: 17.04156818427151
- type: nauc_recall_at_3_max
value: 28.645439108719216
- type: nauc_recall_at_3_std
value: 13.346047828494411
- type: nauc_recall_at_5_diff1
value: 14.906284329771822
- type: nauc_recall_at_5_max
value: 30.58628602415921
- type: nauc_recall_at_5_std
value: 15.755157478191755
- type: ndcg_at_1
value: 30.2
- type: ndcg_at_10
value: 26.638
- type: ndcg_at_100
value: 37.135
- type: ndcg_at_1000
value: 42.576
- type: ndcg_at_20
value: 30.75
- type: ndcg_at_3
value: 24.675
- type: ndcg_at_5
value: 21.836
- type: precision_at_1
value: 30.2
- type: precision_at_10
value: 14.06
- type: precision_at_100
value: 2.904
- type: precision_at_1000
value: 0.42
- type: precision_at_20
value: 9.4
- type: precision_at_3
value: 23.233
- type: precision_at_5
value: 19.439999999999998
- type: recall_at_1
value: 6.128
- type: recall_at_10
value: 28.471999999999998
- type: recall_at_100
value: 58.952000000000005
- type: recall_at_1000
value: 85.137
- type: recall_at_20
value: 38.17
- type: recall_at_3
value: 14.127999999999998
- type: recall_at_5
value: 19.673
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cosine_pearson
value: 86.86608529160739
- type: cosine_spearman
value: 82.88625166203383
- type: euclidean_pearson
value: 84.15494418856142
- type: euclidean_spearman
value: 82.88449294676421
- type: main_score
value: 82.88625166203383
- type: manhattan_pearson
value: 84.39068623474428
- type: manhattan_spearman
value: 82.88065412169463
- type: pearson
value: 86.86608529160739
- type: spearman
value: 82.88625166203383
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cosine_pearson
value: 87.0445014940449
- type: cosine_spearman
value: 80.0880365116599
- type: euclidean_pearson
value: 83.80250772928852
- type: euclidean_spearman
value: 80.0892465260778
- type: main_score
value: 80.0880365116599
- type: manhattan_pearson
value: 83.96793981929336
- type: manhattan_spearman
value: 80.24881789268238
- type: pearson
value: 87.0445014940449
- type: spearman
value: 80.0880365116599
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cosine_pearson
value: 89.33900828959968
- type: cosine_spearman
value: 89.68256358526733
- type: euclidean_pearson
value: 89.29188708262265
- type: euclidean_spearman
value: 89.68204344658601
- type: main_score
value: 89.68256358526733
- type: manhattan_pearson
value: 89.13996588193149
- type: manhattan_spearman
value: 89.61372804425623
- type: pearson
value: 89.33900828959968
- type: spearman
value: 89.68256358526733
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cosine_pearson
value: 86.42029843639123
- type: cosine_spearman
value: 85.0707889220723
- type: euclidean_pearson
value: 85.75114239552562
- type: euclidean_spearman
value: 85.06858160270725
- type: main_score
value: 85.0707889220723
- type: manhattan_pearson
value: 85.86461900459038
- type: manhattan_spearman
value: 85.28671103475605
- type: pearson
value: 86.42029843639123
- type: spearman
value: 85.0707889220723
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cosine_pearson
value: 88.3660081271444
- type: cosine_spearman
value: 89.39375083609528
- type: euclidean_pearson
value: 89.21818482894895
- type: euclidean_spearman
value: 89.39361588875443
- type: main_score
value: 89.39375083609528
- type: manhattan_pearson
value: 89.53535068014057
- type: manhattan_spearman
value: 89.81077130567752
- type: pearson
value: 88.3660081271444
- type: spearman
value: 89.39375083609528
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cosine_pearson
value: 85.60708247171874
- type: cosine_spearman
value: 87.15234952832193
- type: euclidean_pearson
value: 86.21743555548137
- type: euclidean_spearman
value: 87.14450217418016
- type: main_score
value: 87.15234952832193
- type: manhattan_pearson
value: 86.2467748746084
- type: manhattan_spearman
value: 87.2197479717654
- type: pearson
value: 85.60708247171874
- type: spearman
value: 87.15234952832193
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: faeb762787bd10488a50c8b5be4a3b82e411949c
metrics:
- type: cosine_pearson
value: 91.25898556808458
- type: cosine_spearman
value: 91.35372390581641
- type: euclidean_pearson
value: 91.319520321348
- type: euclidean_spearman
value: 91.30821135416925
- type: main_score
value: 91.35372390581641
- type: manhattan_pearson
value: 91.14800959939069
- type: manhattan_spearman
value: 91.09775424245629
- type: pearson
value: 91.25898556808458
- type: spearman
value: 91.35372390581641
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
metrics:
- type: cosine_pearson
value: 67.61637111515797
- type: cosine_spearman
value: 68.10379096526697
- type: euclidean_pearson
value: 69.2652309491375
- type: euclidean_spearman
value: 68.18436357033228
- type: main_score
value: 68.10379096526697
- type: manhattan_pearson
value: 69.52531340510775
- type: manhattan_spearman
value: 68.17874790391862
- type: pearson
value: 67.61637111515797
- type: spearman
value: 68.10379096526697
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cosine_pearson
value: 87.81592853782297
- type: cosine_spearman
value: 88.2302550329183
- type: euclidean_pearson
value: 88.01165144519526
- type: euclidean_spearman
value: 88.23342148890097
- type: main_score
value: 88.2302550329183
- type: manhattan_pearson
value: 88.148592564938
- type: manhattan_spearman
value: 88.49226317320988
- type: pearson
value: 87.81592853782297
- type: spearman
value: 88.2302550329183
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: main_score
value: 89.196009707431
- type: map
value: 89.196009707431
- type: mrr
value: 97.07198121413808
- type: nAUC_map_diff1
value: -14.066667940115352
- type: nAUC_map_max
value: 49.73702475027407
- type: nAUC_map_std
value: 64.0986775782592
- type: nAUC_mrr_diff1
value: 21.96846389417319
- type: nAUC_mrr_max
value: 86.38341077184032
- type: nAUC_mrr_std
value: 75.38945014727746
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: main_score
value: 80.08999999999999
- type: map_at_1
value: 63.161
- type: map_at_10
value: 75.163
- type: map_at_100
value: 75.408
- type: map_at_1000
value: 75.409
- type: map_at_20
value: 75.332
- type: map_at_3
value: 71.839
- type: map_at_5
value: 74.32600000000001
- type: mrr_at_1
value: 66.33333333333333
- type: mrr_at_10
value: 75.95978835978836
- type: mrr_at_100
value: 76.15647881281473
- type: mrr_at_1000
value: 76.15736533763744
- type: mrr_at_20
value: 76.08557368557368
- type: mrr_at_3
value: 73.55555555555556
- type: mrr_at_5
value: 75.4888888888889
- type: nauc_map_at_1000_diff1
value: 77.31229383811176
- type: nauc_map_at_1000_max
value: 58.848319058605156
- type: nauc_map_at_1000_std
value: -14.290090263454985
- type: nauc_map_at_100_diff1
value: 77.31325400213969
- type: nauc_map_at_100_max
value: 58.848885054155275
- type: nauc_map_at_100_std
value: -14.285806618869273
- type: nauc_map_at_10_diff1
value: 77.1806705504232
- type: nauc_map_at_10_max
value: 59.02905805134415
- type: nauc_map_at_10_std
value: -14.132954900037467
- type: nauc_map_at_1_diff1
value: 81.03932970557837
- type: nauc_map_at_1_max
value: 49.02073230264529
- type: nauc_map_at_1_std
value: -22.977452975845512
- type: nauc_map_at_20_diff1
value: 77.22581364818562
- type: nauc_map_at_20_max
value: 58.90740400399768
- type: nauc_map_at_20_std
value: -14.245079150986745
- type: nauc_map_at_3_diff1
value: 76.99793243255563
- type: nauc_map_at_3_max
value: 54.9930733886623
- type: nauc_map_at_3_std
value: -19.297708446082407
- type: nauc_map_at_5_diff1
value: 77.1671608360295
- type: nauc_map_at_5_max
value: 57.27757489519526
- type: nauc_map_at_5_std
value: -15.446338357667708
- type: nauc_mrr_at_1000_diff1
value: 77.4806080821202
- type: nauc_mrr_at_1000_max
value: 60.9213776129792
- type: nauc_mrr_at_1000_std
value: -12.139599632228343
- type: nauc_mrr_at_100_diff1
value: 77.48158073865281
- type: nauc_mrr_at_100_max
value: 60.9218657185361
- type: nauc_mrr_at_100_std
value: -12.13532070453677
- type: nauc_mrr_at_10_diff1
value: 77.32428546014407
- type: nauc_mrr_at_10_max
value: 61.018407010343466
- type: nauc_mrr_at_10_std
value: -12.143193773309347
- type: nauc_mrr_at_1_diff1
value: 80.99806778887115
- type: nauc_mrr_at_1_max
value: 59.17855969530095
- type: nauc_mrr_at_1_std
value: -12.30545640831458
- type: nauc_mrr_at_20_diff1
value: 77.3811067653992
- type: nauc_mrr_at_20_max
value: 60.9648880366335
- type: nauc_mrr_at_20_std
value: -12.124066076541853
- type: nauc_mrr_at_3_diff1
value: 77.31304316321959
- type: nauc_mrr_at_3_max
value: 60.75536766404163
- type: nauc_mrr_at_3_std
value: -12.997876030849623
- type: nauc_mrr_at_5_diff1
value: 77.12952864141742
- type: nauc_mrr_at_5_max
value: 60.995943754968685
- type: nauc_mrr_at_5_std
value: -11.353447465605694
- type: nauc_ndcg_at_1000_diff1
value: 76.81788665683746
- type: nauc_ndcg_at_1000_max
value: 60.35947755262391
- type: nauc_ndcg_at_1000_std
value: -12.884942372460362
- type: nauc_ndcg_at_100_diff1
value: 76.87388230365198
- type: nauc_ndcg_at_100_max
value: 60.38813162962434
- type: nauc_ndcg_at_100_std
value: -12.64384717800478
- type: nauc_ndcg_at_10_diff1
value: 75.87713506026317
- type: nauc_ndcg_at_10_max
value: 61.39356554675667
- type: nauc_ndcg_at_10_std
value: -12.144227584144218
- type: nauc_ndcg_at_1_diff1
value: 80.99806778887115
- type: nauc_ndcg_at_1_max
value: 59.17855969530095
- type: nauc_ndcg_at_1_std
value: -12.30545640831458
- type: nauc_ndcg_at_20_diff1
value: 76.09913944506627
- type: nauc_ndcg_at_20_max
value: 61.01644448834147
- type: nauc_ndcg_at_20_std
value: -12.456209267623857
- type: nauc_ndcg_at_3_diff1
value: 75.52717946614608
- type: nauc_ndcg_at_3_max
value: 58.96433090721983
- type: nauc_ndcg_at_3_std
value: -15.849280494339556
- type: nauc_ndcg_at_5_diff1
value: 75.69026981016921
- type: nauc_ndcg_at_5_max
value: 58.924044405851326
- type: nauc_ndcg_at_5_std
value: -13.182728827923107
- type: nauc_precision_at_1000_diff1
value: -31.634022001609914
- type: nauc_precision_at_1000_max
value: 31.46271490784504
- type: nauc_precision_at_1000_std
value: 60.44801276891442
- type: nauc_precision_at_100_diff1
value: -29.722363469948103
- type: nauc_precision_at_100_max
value: 32.05464592020074
- type: nauc_precision_at_100_std
value: 60.832570595613554
- type: nauc_precision_at_10_diff1
value: -11.91731376599939
- type: nauc_precision_at_10_max
value: 45.43646553157129
- type: nauc_precision_at_10_std
value: 52.962408871791276
- type: nauc_precision_at_1_diff1
value: 80.99806778887115
- type: nauc_precision_at_1_max
value: 59.17855969530095
- type: nauc_precision_at_1_std
value: -12.30545640831458
- type: nauc_precision_at_20_diff1
value: -18.43293701721667
- type: nauc_precision_at_20_max
value: 39.53434874203934
- type: nauc_precision_at_20_std
value: 53.6291982468461
- type: nauc_precision_at_3_diff1
value: 30.84789043003892
- type: nauc_precision_at_3_max
value: 55.660727758110376
- type: nauc_precision_at_3_std
value: 17.87243920840355
- type: nauc_precision_at_5_diff1
value: 4.099395181445625
- type: nauc_precision_at_5_max
value: 50.346770968709386
- type: nauc_precision_at_5_std
value: 44.66722483255029
- type: nauc_recall_at_1000_diff1
value: .nan
- type: nauc_recall_at_1000_max
value: .nan
- type: nauc_recall_at_1000_std
value: .nan
- type: nauc_recall_at_100_diff1
value: 100.0
- type: nauc_recall_at_100_max
value: 72.2222222222207
- type: nauc_recall_at_100_std
value: 86.92810457516407
- type: nauc_recall_at_10_diff1
value: 62.18887555022005
- type: nauc_recall_at_10_max
value: 75.14339068960916
- type: nauc_recall_at_10_std
value: -1.4912631719357108
- type: nauc_recall_at_1_diff1
value: 81.03932970557837
- type: nauc_recall_at_1_max
value: 49.02073230264529
- type: nauc_recall_at_1_std
value: -22.977452975845512
- type: nauc_recall_at_20_diff1
value: 59.27414444038499
- type: nauc_recall_at_20_max
value: 76.32241302318047
- type: nauc_recall_at_20_std
value: -0.8322169447488666
- type: nauc_recall_at_3_diff1
value: 69.58783002593157
- type: nauc_recall_at_3_max
value: 55.89660919896563
- type: nauc_recall_at_3_std
value: -21.183005510917862
- type: nauc_recall_at_5_diff1
value: 65.53660499878802
- type: nauc_recall_at_5_max
value: 58.218018535135805
- type: nauc_recall_at_5_std
value: -8.328952210032455
- type: ndcg_at_1
value: 66.333
- type: ndcg_at_10
value: 80.08999999999999
- type: ndcg_at_100
value: 81.24900000000001
- type: ndcg_at_1000
value: 81.28800000000001
- type: ndcg_at_20
value: 80.625
- type: ndcg_at_3
value: 74.98700000000001
- type: ndcg_at_5
value: 78.553
- type: precision_at_1
value: 66.333
- type: precision_at_10
value: 10.667
- type: precision_at_100
value: 1.127
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_20
value: 5.45
- type: precision_at_3
value: 29.555999999999997
- type: precision_at_5
value: 20.133000000000003
- type: recall_at_1
value: 63.161
- type: recall_at_10
value: 94.167
- type: recall_at_100
value: 99.667
- type: recall_at_1000
value: 100.0
- type: recall_at_20
value: 96.167
- type: recall_at_3
value: 80.972
- type: recall_at_5
value: 89.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cosine_accuracy
value: 99.81881188118813
- type: cosine_accuracy_threshold
value: 85.55081486701965
- type: cosine_ap
value: 96.0359661816236
- type: cosine_f1
value: 90.6584992343032
- type: cosine_f1_threshold
value: 84.82859134674072
- type: cosine_precision
value: 92.59645464025026
- type: cosine_recall
value: 88.8
- type: dot_accuracy
value: 99.81881188118813
- type: dot_accuracy_threshold
value: 84.91908311843872
- type: dot_ap
value: 96.05740121094365
- type: dot_f1
value: 90.81885856079404
- type: dot_f1_threshold
value: 83.84919166564941
- type: dot_precision
value: 90.14778325123153
- type: dot_recall
value: 91.5
- type: euclidean_accuracy
value: 99.82079207920792
- type: euclidean_accuracy_threshold
value: 54.49706315994263
- type: euclidean_ap
value: 96.03223527068818
- type: euclidean_f1
value: 90.72270630445925
- type: euclidean_f1_threshold
value: 54.49706315994263
- type: euclidean_precision
value: 93.05993690851734
- type: euclidean_recall
value: 88.5
- type: main_score
value: 96.32671902439806
- type: manhattan_accuracy
value: 99.83267326732673
- type: manhattan_accuracy_threshold
value: 3818.192672729492
- type: manhattan_ap
value: 96.32671902439806
- type: manhattan_f1
value: 91.52032112393378
- type: manhattan_f1_threshold
value: 3818.192672729492
- type: manhattan_precision
value: 91.8429003021148
- type: manhattan_recall
value: 91.2
- type: max_ap
value: 96.32671902439806
- type: max_f1
value: 91.52032112393378
- type: max_precision
value: 93.05993690851734
- type: max_recall
value: 91.5
- type: similarity_accuracy
value: 99.81881188118813
- type: similarity_accuracy_threshold
value: 85.55081486701965
- type: similarity_ap
value: 96.0359661816236
- type: similarity_f1
value: 90.6584992343032
- type: similarity_f1_threshold
value: 84.82859134674072
- type: similarity_precision
value: 92.59645464025026
- type: similarity_recall
value: 88.8
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: main_score
value: 80.28558559137414
- type: v_measure
value: 80.28558559137414
- type: v_measure_std
value: 2.795276520287584
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: main_score
value: 49.57135582416209
- type: v_measure
value: 49.57135582416209
- type: v_measure_std
value: 1.6414135468423754
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: main_score
value: 55.253002583598644
- type: map
value: 55.253002583598644
- type: mrr
value: 56.24172396231219
- type: nAUC_map_diff1
value: 40.00053248203427
- type: nAUC_map_max
value: 10.05441740585869
- type: nAUC_map_std
value: 8.227169286387552
- type: nAUC_mrr_diff1
value: 40.250446264233744
- type: nAUC_mrr_max
value: 10.586310195339053
- type: nAUC_mrr_std
value: 8.47326494370076
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cosine_pearson
value: 31.19874648747059
- type: cosine_spearman
value: 31.493550648844863
- type: dot_pearson
value: 31.157847680289407
- type: dot_spearman
value: 31.575299712180538
- type: main_score
value: 31.493550648844863
- type: pearson
value: 31.19874648747059
- type: spearman
value: 31.493550648844863
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: main_score
value: 85.983
- type: map_at_1
value: 0.247
- type: map_at_10
value: 2.177
- type: map_at_100
value: 14.804
- type: map_at_1000
value: 37.045
- type: map_at_20
value: 4.12
- type: map_at_3
value: 0.7000000000000001
- type: map_at_5
value: 1.1320000000000001
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_20
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: nauc_map_at_1000_diff1
value: -0.9165125200337213
- type: nauc_map_at_1000_max
value: 40.260117798042764
- type: nauc_map_at_1000_std
value: 71.72789335831554
- type: nauc_map_at_100_diff1
value: 20.493827311583953
- type: nauc_map_at_100_max
value: 21.005742079276462
- type: nauc_map_at_100_std
value: 62.53815607831659
- type: nauc_map_at_10_diff1
value: 31.289297684528215
- type: nauc_map_at_10_max
value: 7.86554294370268
- type: nauc_map_at_10_std
value: 37.26191657133897
- type: nauc_map_at_1_diff1
value: 25.57568148849456
- type: nauc_map_at_1_max
value: -5.9767435623941445
- type: nauc_map_at_1_std
value: 30.849871717506755
- type: nauc_map_at_20_diff1
value: 30.896018204532087
- type: nauc_map_at_20_max
value: 8.667077299744314
- type: nauc_map_at_20_std
value: 41.512687168412924
- type: nauc_map_at_3_diff1
value: 29.44724521006598
- type: nauc_map_at_3_max
value: 1.597496889532064
- type: nauc_map_at_3_std
value: 32.25013773854697
- type: nauc_map_at_5_diff1
value: 27.387036605618825
- type: nauc_map_at_5_max
value: 5.402983746211454
- type: nauc_map_at_5_std
value: 33.940523962472184
- type: nauc_mrr_at_1000_diff1
value: -14.122315592903503
- type: nauc_mrr_at_1000_max
value: 33.84687208216605
- type: nauc_mrr_at_1000_std
value: 86.11111111111092
- type: nauc_mrr_at_100_diff1
value: -14.122315592903503
- type: nauc_mrr_at_100_max
value: 33.84687208216605
- type: nauc_mrr_at_100_std
value: 86.11111111111092
- type: nauc_mrr_at_10_diff1
value: -14.122315592903503
- type: nauc_mrr_at_10_max
value: 33.84687208216605
- type: nauc_mrr_at_10_std
value: 86.11111111111092
- type: nauc_mrr_at_1_diff1
value: -14.122315592903831
- type: nauc_mrr_at_1_max
value: 33.84687208216637
- type: nauc_mrr_at_1_std
value: 86.11111111111124
- type: nauc_mrr_at_20_diff1
value: -14.122315592903503
- type: nauc_mrr_at_20_max
value: 33.84687208216605
- type: nauc_mrr_at_20_std
value: 86.11111111111092
- type: nauc_mrr_at_3_diff1
value: -14.122315592903503
- type: nauc_mrr_at_3_max
value: 33.84687208216605
- type: nauc_mrr_at_3_std
value: 86.11111111111092
- type: nauc_mrr_at_5_diff1
value: -14.122315592903503
- type: nauc_mrr_at_5_max
value: 33.84687208216605
- type: nauc_mrr_at_5_std
value: 86.11111111111092
- type: nauc_ndcg_at_1000_diff1
value: 8.745907669561928
- type: nauc_ndcg_at_1000_max
value: 45.43307237994533
- type: nauc_ndcg_at_1000_std
value: 74.93357447176336
- type: nauc_ndcg_at_100_diff1
value: -3.9719350773353765
- type: nauc_ndcg_at_100_max
value: 44.43705332397461
- type: nauc_ndcg_at_100_std
value: 61.59493812371758
- type: nauc_ndcg_at_10_diff1
value: 15.230915878367348
- type: nauc_ndcg_at_10_max
value: 48.332840970836635
- type: nauc_ndcg_at_10_std
value: 46.888785065125774
- type: nauc_ndcg_at_1_diff1
value: 13.219732337379442
- type: nauc_ndcg_at_1_max
value: 45.19919078742603
- type: nauc_ndcg_at_1_std
value: 64.68253968253977
- type: nauc_ndcg_at_20_diff1
value: 12.479648691964865
- type: nauc_ndcg_at_20_max
value: 48.76688248450331
- type: nauc_ndcg_at_20_std
value: 51.450399755887545
- type: nauc_ndcg_at_3_diff1
value: 6.165414201871464
- type: nauc_ndcg_at_3_max
value: 45.089689347691035
- type: nauc_ndcg_at_3_std
value: 41.08249161845213
- type: nauc_ndcg_at_5_diff1
value: 7.411245806844721
- type: nauc_ndcg_at_5_max
value: 47.818748093538076
- type: nauc_ndcg_at_5_std
value: 45.907685763676575
- type: nauc_precision_at_1000_diff1
value: -30.574290219847345
- type: nauc_precision_at_1000_max
value: 32.56926126118719
- type: nauc_precision_at_1000_std
value: 14.584504392628874
- type: nauc_precision_at_100_diff1
value: -10.199740234718847
- type: nauc_precision_at_100_max
value: 41.0213226769777
- type: nauc_precision_at_100_std
value: 56.975760776771324
- type: nauc_precision_at_10_diff1
value: 7.865792689701161
- type: nauc_precision_at_10_max
value: 52.00432275201737
- type: nauc_precision_at_10_std
value: 43.89512276413724
- type: nauc_precision_at_1_diff1
value: -14.122315592903831
- type: nauc_precision_at_1_max
value: 33.84687208216637
- type: nauc_precision_at_1_std
value: 86.11111111111124
- type: nauc_precision_at_20_diff1
value: 5.481424191880084
- type: nauc_precision_at_20_max
value: 46.86629331792725
- type: nauc_precision_at_20_std
value: 49.245692667517496
- type: nauc_precision_at_3_diff1
value: -5.870408807869163
- type: nauc_precision_at_3_max
value: 48.73657612128875
- type: nauc_precision_at_3_std
value: 41.15152062088262
- type: nauc_precision_at_5_diff1
value: -4.550610529125413
- type: nauc_precision_at_5_max
value: 60.390115878205386
- type: nauc_precision_at_5_std
value: 44.16494295055696
- type: nauc_recall_at_1000_diff1
value: 8.047794367079034
- type: nauc_recall_at_1000_max
value: 37.07551482870489
- type: nauc_recall_at_1000_std
value: 66.20862163364201
- type: nauc_recall_at_100_diff1
value: 25.08104923597475
- type: nauc_recall_at_100_max
value: 9.971294642165734
- type: nauc_recall_at_100_std
value: 51.737814074891254
- type: nauc_recall_at_10_diff1
value: 32.33148478369628
- type: nauc_recall_at_10_max
value: 1.3767192150014917
- type: nauc_recall_at_10_std
value: 30.801926742876308
- type: nauc_recall_at_1_diff1
value: 25.57568148849456
- type: nauc_recall_at_1_max
value: -5.9767435623941445
- type: nauc_recall_at_1_std
value: 30.849871717506755
- type: nauc_recall_at_20_diff1
value: 31.716580022934654
- type: nauc_recall_at_20_max
value: -0.1281270579464631
- type: nauc_recall_at_20_std
value: 33.76185294993676
- type: nauc_recall_at_3_diff1
value: 29.758810004388348
- type: nauc_recall_at_3_max
value: -1.9442985017191816
- type: nauc_recall_at_3_std
value: 27.45550076962206
- type: nauc_recall_at_5_diff1
value: 27.047710181576672
- type: nauc_recall_at_5_max
value: 1.5237000700880248
- type: nauc_recall_at_5_std
value: 28.235297950159698
- type: ndcg_at_1
value: 94.0
- type: ndcg_at_10
value: 85.983
- type: ndcg_at_100
value: 69.195
- type: ndcg_at_1000
value: 62.541000000000004
- type: ndcg_at_20
value: 83.405
- type: ndcg_at_3
value: 89.98899999999999
- type: ndcg_at_5
value: 87.905
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 89.4
- type: precision_at_100
value: 71.54
- type: precision_at_1000
value: 27.594
- type: precision_at_20
value: 87.2
- type: precision_at_3
value: 92.667
- type: precision_at_5
value: 90.8
- type: recall_at_1
value: 0.247
- type: recall_at_10
value: 2.315
- type: recall_at_100
value: 17.574
- type: recall_at_1000
value: 59.336999999999996
- type: recall_at_20
value: 4.491
- type: recall_at_3
value: 0.7250000000000001
- type: recall_at_5
value: 1.1820000000000002
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: main_score
value: 29.944
- type: map_at_1
value: 3.064
- type: map_at_10
value: 11.501999999999999
- type: map_at_100
value: 18.736
- type: map_at_1000
value: 20.333000000000002
- type: map_at_20
value: 14.057
- type: map_at_3
value: 6.300999999999999
- type: map_at_5
value: 8.463
- type: mrr_at_1
value: 44.89795918367347
- type: mrr_at_10
value: 58.41188856494979
- type: mrr_at_100
value: 58.93964266413245
- type: mrr_at_1000
value: 58.93964266413245
- type: mrr_at_20
value: 58.767485349118
- type: mrr_at_3
value: 54.42176870748299
- type: mrr_at_5
value: 56.666666666666664
- type: nauc_map_at_1000_diff1
value: 11.478593385608479
- type: nauc_map_at_1000_max
value: 10.309889845044324
- type: nauc_map_at_1000_std
value: 21.16721939940238
- type: nauc_map_at_100_diff1
value: 11.570438543562418
- type: nauc_map_at_100_max
value: 8.426183648064834
- type: nauc_map_at_100_std
value: 18.56231985033613
- type: nauc_map_at_10_diff1
value: 22.37735506247481
- type: nauc_map_at_10_max
value: 5.455946239060806
- type: nauc_map_at_10_std
value: -4.2848826518388154
- type: nauc_map_at_1_diff1
value: 27.853645380676824
- type: nauc_map_at_1_max
value: 7.30739948053113
- type: nauc_map_at_1_std
value: -0.2773663157814586
- type: nauc_map_at_20_diff1
value: 14.724669779924648
- type: nauc_map_at_20_max
value: 10.12882779173533
- type: nauc_map_at_20_std
value: 4.4803777672120875
- type: nauc_map_at_3_diff1
value: 31.891173385921263
- type: nauc_map_at_3_max
value: 4.889652271827218
- type: nauc_map_at_3_std
value: -9.477460238651643
- type: nauc_map_at_5_diff1
value: 31.489012040465003
- type: nauc_map_at_5_max
value: 1.7330092417337482
- type: nauc_map_at_5_std
value: -8.137018608469637
- type: nauc_mrr_at_1000_diff1
value: 24.411522237082416
- type: nauc_mrr_at_1000_max
value: 11.286971076556688
- type: nauc_mrr_at_1000_std
value: 23.443174210894043
- type: nauc_mrr_at_100_diff1
value: 24.411522237082416
- type: nauc_mrr_at_100_max
value: 11.286971076556688
- type: nauc_mrr_at_100_std
value: 23.443174210894043
- type: nauc_mrr_at_10_diff1
value: 23.948152308265186
- type: nauc_mrr_at_10_max
value: 12.22420979621155
- type: nauc_mrr_at_10_std
value: 23.557939024705544
- type: nauc_mrr_at_1_diff1
value: 17.902334894536107
- type: nauc_mrr_at_1_max
value: 17.36969662861018
- type: nauc_mrr_at_1_std
value: 19.425714969048734
- type: nauc_mrr_at_20_diff1
value: 24.635893795899797
- type: nauc_mrr_at_20_max
value: 11.330541067194913
- type: nauc_mrr_at_20_std
value: 23.74518583400233
- type: nauc_mrr_at_3_diff1
value: 25.045536328282587
- type: nauc_mrr_at_3_max
value: 7.497967004732733
- type: nauc_mrr_at_3_std
value: 24.167153007320078
- type: nauc_mrr_at_5_diff1
value: 24.328479930592454
- type: nauc_mrr_at_5_max
value: 10.037126854938336
- type: nauc_mrr_at_5_std
value: 25.236208055346136
- type: nauc_ndcg_at_1000_diff1
value: 15.555347444667389
- type: nauc_ndcg_at_1000_max
value: 13.356591700655718
- type: nauc_ndcg_at_1000_std
value: 42.42395845935052
- type: nauc_ndcg_at_100_diff1
value: 13.110526060413708
- type: nauc_ndcg_at_100_max
value: 3.140006440162515
- type: nauc_ndcg_at_100_std
value: 39.02733288398033
- type: nauc_ndcg_at_10_diff1
value: 20.68853369009725
- type: nauc_ndcg_at_10_max
value: 2.435389817058852
- type: nauc_ndcg_at_10_std
value: 10.038202768784316
- type: nauc_ndcg_at_1_diff1
value: 20.17287594582385
- type: nauc_ndcg_at_1_max
value: 12.487205168273196
- type: nauc_ndcg_at_1_std
value: 20.639827614373075
- type: nauc_ndcg_at_20_diff1
value: 16.987577348502985
- type: nauc_ndcg_at_20_max
value: 2.9978717644469266
- type: nauc_ndcg_at_20_std
value: 13.015690866750354
- type: nauc_ndcg_at_3_diff1
value: 32.392223079245575
- type: nauc_ndcg_at_3_max
value: 1.587587110582544
- type: nauc_ndcg_at_3_std
value: 12.850592473446609
- type: nauc_ndcg_at_5_diff1
value: 32.80244517369626
- type: nauc_ndcg_at_5_max
value: 5.8939933777508084
- type: nauc_ndcg_at_5_std
value: 15.779687411463414
- type: nauc_precision_at_1000_diff1
value: -14.314031720452537
- type: nauc_precision_at_1000_max
value: 32.87886666567266
- type: nauc_precision_at_1000_std
value: 21.49347046886851
- type: nauc_precision_at_100_diff1
value: -9.4034008613839
- type: nauc_precision_at_100_max
value: 16.784075123309645
- type: nauc_precision_at_100_std
value: 73.14688535393604
- type: nauc_precision_at_10_diff1
value: 6.855101404043058
- type: nauc_precision_at_10_max
value: 6.52491228645612
- type: nauc_precision_at_10_std
value: 16.104602266016744
- type: nauc_precision_at_1_diff1
value: 17.902334894536107
- type: nauc_precision_at_1_max
value: 17.36969662861018
- type: nauc_precision_at_1_std
value: 19.425714969048734
- type: nauc_precision_at_20_diff1
value: -5.337534613602212
- type: nauc_precision_at_20_max
value: 17.722925454767218
- type: nauc_precision_at_20_std
value: 34.26680462132849
- type: nauc_precision_at_3_diff1
value: 31.054623397809255
- type: nauc_precision_at_3_max
value: -0.92038600946826
- type: nauc_precision_at_3_std
value: 8.326997076862916
- type: nauc_precision_at_5_diff1
value: 29.784942296920462
- type: nauc_precision_at_5_max
value: 6.337469263434779
- type: nauc_precision_at_5_std
value: 12.789597196020974
- type: nauc_recall_at_1000_diff1
value: -3.8177981862041364
- type: nauc_recall_at_1000_max
value: 14.206064332229163
- type: nauc_recall_at_1000_std
value: 74.18853420771269
- type: nauc_recall_at_100_diff1
value: 0.7677996771461106
- type: nauc_recall_at_100_max
value: -4.139924106878441
- type: nauc_recall_at_100_std
value: 48.319930706362896
- type: nauc_recall_at_10_diff1
value: 12.038835537494322
- type: nauc_recall_at_10_max
value: -2.0498983557854418
- type: nauc_recall_at_10_std
value: -2.0339180690854493
- type: nauc_recall_at_1_diff1
value: 27.853645380676824
- type: nauc_recall_at_1_max
value: 7.30739948053113
- type: nauc_recall_at_1_std
value: -0.2773663157814586
- type: nauc_recall_at_20_diff1
value: 0.7907893667756708
- type: nauc_recall_at_20_max
value: 0.8795499810558195
- type: nauc_recall_at_20_std
value: 11.512483291688282
- type: nauc_recall_at_3_diff1
value: 33.19440392639576
- type: nauc_recall_at_3_max
value: -1.5494237697432613
- type: nauc_recall_at_3_std
value: -8.560408808376984
- type: nauc_recall_at_5_diff1
value: 27.42193873870941
- type: nauc_recall_at_5_max
value: -4.74350293281128
- type: nauc_recall_at_5_std
value: -7.618060131179654
- type: ndcg_at_1
value: 42.857
- type: ndcg_at_10
value: 29.944
- type: ndcg_at_100
value: 42.624
- type: ndcg_at_1000
value: 53.384
- type: ndcg_at_20
value: 30.135
- type: ndcg_at_3
value: 34.847
- type: ndcg_at_5
value: 32.573
- type: precision_at_1
value: 44.897999999999996
- type: precision_at_10
value: 25.306
- type: precision_at_100
value: 8.694
- type: precision_at_1000
value: 1.616
- type: precision_at_20
value: 19.082
- type: precision_at_3
value: 34.014
- type: precision_at_5
value: 31.019999999999996
- type: recall_at_1
value: 3.064
- type: recall_at_10
value: 17.849999999999998
- type: recall_at_100
value: 53.217999999999996
- type: recall_at_1000
value: 87.095
- type: recall_at_20
value: 26.111
- type: recall_at_3
value: 7.383000000000001
- type: recall_at_5
value: 11.434
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 88.759765625
- type: ap
value: 36.49152357863017
- type: ap_weighted
value: 36.49152357863017
- type: f1
value: 74.4692714448641
- type: f1_weighted
value: 90.54372649306606
- type: main_score
value: 88.759765625
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 74.8443689869836
- type: f1
value: 75.1139662898148
- type: f1_weighted
value: 74.7369003946243
- type: main_score
value: 74.8443689869836
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: main_score
value: 61.42918790942448
- type: v_measure
value: 61.42918790942448
- type: v_measure_std
value: 1.0156550098843082
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cosine_accuracy
value: 88.22197055492639
- type: cosine_accuracy_threshold
value: 83.30042362213135
- type: cosine_ap
value: 80.57754959194938
- type: cosine_f1
value: 73.70579190158894
- type: cosine_f1_threshold
value: 81.04978799819946
- type: cosine_precision
value: 71.64922770303936
- type: cosine_recall
value: 75.8839050131926
- type: dot_accuracy
value: 88.23985217857782
- type: dot_accuracy_threshold
value: 83.31039547920227
- type: dot_ap
value: 80.57533213448181
- type: dot_f1
value: 73.61309601143302
- type: dot_f1_threshold
value: 81.33968114852905
- type: dot_precision
value: 72.51087791144101
- type: dot_recall
value: 74.74934036939314
- type: euclidean_accuracy
value: 88.22197055492639
- type: euclidean_accuracy_threshold
value: 58.290231227874756
- type: euclidean_ap
value: 80.57982723880139
- type: euclidean_f1
value: 73.63426519620417
- type: euclidean_f1_threshold
value: 61.55576705932617
- type: euclidean_precision
value: 71.63173652694611
- type: euclidean_recall
value: 75.75197889182058
- type: main_score
value: 80.57982723880139
- type: manhattan_accuracy
value: 88.14448351910353
- type: manhattan_accuracy_threshold
value: 3907.2471618652344
- type: manhattan_ap
value: 80.3538079655539
- type: manhattan_f1
value: 73.40466675261054
- type: manhattan_f1_threshold
value: 4103.794097900391
- type: manhattan_precision
value: 71.76707839677337
- type: manhattan_recall
value: 75.11873350923483
- type: max_ap
value: 80.57982723880139
- type: max_f1
value: 73.70579190158894
- type: max_precision
value: 72.51087791144101
- type: max_recall
value: 75.8839050131926
- type: similarity_accuracy
value: 88.22197055492639
- type: similarity_accuracy_threshold
value: 83.30042362213135
- type: similarity_ap
value: 80.57754959194938
- type: similarity_f1
value: 73.70579190158894
- type: similarity_f1_threshold
value: 81.04978799819946
- type: similarity_precision
value: 71.64922770303936
- type: similarity_recall
value: 75.8839050131926
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cosine_accuracy
value: 89.88628866379477
- type: cosine_accuracy_threshold
value: 80.8050274848938
- type: cosine_ap
value: 87.57594591596816
- type: cosine_f1
value: 80.0812257707218
- type: cosine_f1_threshold
value: 77.990061044693
- type: cosine_precision
value: 76.93126197063205
- type: cosine_recall
value: 83.50015398829689
- type: dot_accuracy
value: 89.87852679784221
- type: dot_accuracy_threshold
value: 80.84419965744019
- type: dot_ap
value: 87.56136742222151
- type: dot_f1
value: 80.05898617511521
- type: dot_f1_threshold
value: 77.92385816574097
- type: dot_precision
value: 76.80554573106035
- type: dot_recall
value: 83.60024638127503
- type: euclidean_accuracy
value: 89.86882446540149
- type: euclidean_accuracy_threshold
value: 62.08193898200989
- type: euclidean_ap
value: 87.57517549192228
- type: euclidean_f1
value: 80.05286925872892
- type: euclidean_f1_threshold
value: 66.65036082267761
- type: euclidean_precision
value: 76.51063232507545
- type: euclidean_recall
value: 83.93902063443178
- type: main_score
value: 87.64162614197194
- type: manhattan_accuracy
value: 89.8959909962355
- type: manhattan_accuracy_threshold
value: 4176.108169555664
- type: manhattan_ap
value: 87.64162614197194
- type: manhattan_f1
value: 80.17116279069768
- type: manhattan_f1_threshold
value: 4433.153533935547
- type: manhattan_precision
value: 77.57615035644848
- type: manhattan_recall
value: 82.94579611949491
- type: max_ap
value: 87.64162614197194
- type: max_f1
value: 80.17116279069768
- type: max_precision
value: 77.57615035644848
- type: max_recall
value: 83.93902063443178
- type: similarity_accuracy
value: 89.88628866379477
- type: similarity_accuracy_threshold
value: 80.8050274848938
- type: similarity_ap
value: 87.57594591596816
- type: similarity_f1
value: 80.0812257707218
- type: similarity_f1_threshold
value: 77.990061044693
- type: similarity_precision
value: 76.93126197063205
- type: similarity_recall
value: 83.50015398829689
---
# Updates
Hi everyone, thanks for using the stella models.
After six months of work, I have trained the jasper model on top of the stella model. It is a multimodal model that ranks 2nd on MTEB (results submitted on 2024-12-11 and pending official review: https://github.com/embeddings-benchmark/results/pull/68).
Model link: https://huggingface.co/infgrad/jasper_en_vision_language_v1
Next I'll focus on the technical report, training data, and related code; hopefully the tricks I've used will be of some help to you!
This work was done in my free time as a personal hobby. One person's time and energy is limited, so contributions of any kind are welcome!
You can also find these models on my [homepage](https://huggingface.co/infgrad).
# Introduction
The models are trained on top of `Alibaba-NLP/gte-large-en-v1.5` and `Alibaba-NLP/gte-Qwen2-1.5B-instruct`. Thanks for their contributions!
**We simplify prompt usage by providing two prompts that cover most general tasks: one for s2p (sentence-to-passage) and one for s2s (sentence-to-sentence).**
Prompt for the s2p task (e.g., retrieval):
```text
Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: {query}
```
Prompt for the s2s task (e.g., semantic textual similarity):
```text
Instruct: Retrieve semantically similar text.\nQuery: {query}
```
The models are finally trained with [MRL](https://arxiv.org/abs/2205.13147), so they support multiple output dimensions: 512, 768, 1024, 2048, 4096, 6144, and 8192.
The higher the dimension, the better the performance.
**Generally speaking, 1024d is good enough.** The MTEB score at 1024d is only 0.001 lower than at 8192d.
# Model directory structure
The model directory structure is very simple: it is a standard SentenceTransformer directory **with a series of `2_Dense_{dims}` folders**, where `dims` is the final vector dimension.
For example, the `2_Dense_256` folder stores the Linear weights that project vectors down to 256 dimensions.
Please refer to the following sections for specific usage instructions.
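To switch dimensions, point `modules.json` at a different `2_Dense_{dims}` folder. Below is a minimal sketch, assuming you have cloned the model locally and that `modules.json` follows the standard SentenceTransformer layout (a JSON list of module entries with a `path` field); verify against your copy before running.
```python
import json
from pathlib import Path

model_dir = Path("stella_en_1.5B_v5")  # hypothetical path to your local clone
target_dim = 256                       # e.g. 256, 512, 768, 1024, 2048, 4096, 6144, 8192

modules_path = model_dir / "modules.json"
modules = json.loads(modules_path.read_text())
for module in modules:
    # The Dense module's "path" names a 2_Dense_{dims} folder; swap it out.
    if module.get("path", "").startswith("2_Dense_"):
        module["path"] = f"2_Dense_{target_dim}"
modules_path.write_text(json.dumps(modules, indent=2))
```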
# Usage
You can use the `SentenceTransformers` or `transformers` library to encode text.
## Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
# This model supports two prompts: "s2p_query" and "s2s_query" for sentence-to-passage and sentence-to-sentence tasks, respectively.
# They are defined in `config_sentence_transformers.json`
query_prompt_name = "s2p_query"
queries = [
"What are some ways to reduce stress?",
"What are the benefits of drinking green tea?",
]
# docs do not need any prompts
docs = [
"There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
"Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
]
# !The default dimension is 1024, if you need other dimensions, please clone the model and modify `modules.json` to replace `2_Dense_1024` with another dimension, e.g. `2_Dense_256` or `2_Dense_8192` !
model = SentenceTransformer("dunzhang/stella_en_1.5B_v5", trust_remote_code=True).cuda()
query_embeddings = model.encode(queries, prompt_name=query_prompt_name)
doc_embeddings = model.encode(docs)
print(query_embeddings.shape, doc_embeddings.shape)
# (2, 1024) (2, 1024)
similarities = model.similarity(query_embeddings, doc_embeddings)
print(similarities)
# tensor([[0.8179, 0.2958],
# [0.3194, 0.7854]])
```
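As a follow-up usage example, you can rank the documents for each query directly from the similarity matrix; this sketch reuses `queries` and `similarities` from the block above.
```python
import torch

# Rank documents per query from the similarity matrix computed above.
for i, query in enumerate(queries):
    order = torch.argsort(similarities[i], descending=True)
    best = order[0].item()
    print(f"{query!r} -> best doc index {best} (score {similarities[i, best].item():.4f})")
```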
## Transformers
```python
import os
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.preprocessing import normalize
query_prompt = "Instruct: Given a web search query, retrieve relevant passages that answer the query.\nQuery: "
queries = [
"What are some ways to reduce stress?",
"What are the benefits of drinking green tea?",
]
queries = [query_prompt + query for query in queries]
# docs do not need any prompts
docs = [
"There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent stress from building up.",
"Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties.",
]
# The path of your model after cloning it
model_dir = "{Your MODEL_PATH}"
vector_dim = 1024
vector_linear_directory = f"2_Dense_{vector_dim}"
model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
vector_linear = torch.nn.Linear(in_features=model.config.hidden_size, out_features=vector_dim)
vector_linear_dict = {
k.replace("linear.", ""): v for k, v in
torch.load(os.path.join(model_dir, f"{vector_linear_directory}/pytorch_model.bin")).items()
}
vector_linear.load_state_dict(vector_linear_dict)
vector_linear.cuda()
# Embed the queries
with torch.no_grad():
input_data = tokenizer(queries, padding="longest", truncation=True, max_length=512, return_tensors="pt")
input_data = {k: v.cuda() for k, v in input_data.items()}
attention_mask = input_data["attention_mask"]
last_hidden_state = model(**input_data)[0]
last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
query_vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
query_vectors = normalize(vector_linear(query_vectors).cpu().numpy())
# Embed the documents
with torch.no_grad():
input_data = tokenizer(docs, padding="longest", truncation=True, max_length=512, return_tensors="pt")
input_data = {k: v.cuda() for k, v in input_data.items()}
attention_mask = input_data["attention_mask"]
last_hidden_state = model(**input_data)[0]
last_hidden = last_hidden_state.masked_fill(~attention_mask[..., None].bool(), 0.0)
docs_vectors = last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
docs_vectors = normalize(vector_linear(docs_vectors).cpu().numpy())
print(query_vectors.shape, docs_vectors.shape)
# (2, 1024) (2, 1024)
similarities = query_vectors @ docs_vectors.T
print(similarities)
# [[0.8178789 0.2958377 ]
# [0.31938642 0.7853526 ]]
```
## Infinity
Usage with [Infinity](https://github.com/michaelfeil/infinity), an MIT-licensed inference server, via Docker.
```bash
docker run --gpus all -p 7997:7997 -v $PWD/data:/app/.cache \
michaelf34/infinity:0.0.69-trt-onnx \
v2 --model-id dunzhang/stella_en_1.5B_v5 --batch-size 16 --device cuda --engine torch --port 7997
```
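Once the server is up, you can query it over HTTP. A minimal client sketch, assuming the container's port 7997 is published to the host and that this Infinity version exposes its OpenAI-compatible `/embeddings` endpoint (check the Infinity docs for your version):
```python
import requests

# Hypothetical client call against the Infinity server started above.
resp = requests.post(
    "http://localhost:7997/embeddings",
    json={"model": "dunzhang/stella_en_1.5B_v5", "input": ["What are some ways to reduce stress?"]},
)
resp.raise_for_status()
# OpenAI-style response: {"data": [{"embedding": [...]}, ...]}
embedding = resp.json()["data"][0]["embedding"]
print(len(embedding))
```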
# FAQ
Q: What are the details of training?
A: The training method and datasets will be released in the future (timing unknown; they may be provided in a paper).
Q: How do I choose a suitable prompt for my own task?
A: In most cases, please use the s2p and s2s prompts. These two prompts account for the vast majority of the training data.
Q: How do I reproduce the MTEB results?
A: Please use the evaluation scripts in `Alibaba-NLP/gte-Qwen2-1.5B-instruct` or `intfloat/e5-mistral-7b-instruct`, or see the sketch below.
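For reference, here is a minimal sketch with the `mteb` package. This is not the authors' official script: the `mteb` API varies across versions, and you may need to handle the s2p/s2s prompts explicitly to match the reported numbers.
```python
import mteb
from sentence_transformers import SentenceTransformer

# Prompts defined in config_sentence_transformers.json are picked up by
# sentence-transformers when the model is loaded.
model = SentenceTransformer("dunzhang/stella_en_1.5B_v5", trust_remote_code=True)

# Run a single task as a smoke test.
tasks = mteb.get_tasks(tasks=["STS12"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="mteb_results")
```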
Q: Why does each dimension have its own linear weight?
A: MRL can be trained in several ways; we chose the variant with the best performance.
Q: What is the sequence length of the models?
A: 512 is recommended. In our experiments, almost all models perform poorly on specialized long-text retrieval datasets; besides, the model is trained on data of length 512. This may be something to optimize in the future.
If you have any questions, please start a discussion in the community tab. | [
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
Teradata/bge-base-en-v1.5 | Teradata | feature-extraction | [
"onnx",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"teradata",
"en",
"license:mit",
"model-index",
"region:us"
] | 2025-02-12T16:06:02 | 2025-03-04T09:38:36 | 32 | 0 | ---
language:
- en
license: mit
tags:
- feature-extraction
- sentence-similarity
- mteb
- onnx
- teradata
model-index:
- name: bge-base-en-v1.5
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 76.14925373134328
- type: ap
value: 39.32336517995478
- type: f1
value: 70.16902252611425
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 93.386825
- type: ap
value: 90.21276917991995
- type: f1
value: 93.37741030006174
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 48.846000000000004
- type: f1
value: 48.14646269778261
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: arguana
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.754000000000005
- type: map_at_10
value: 55.761
- type: map_at_100
value: 56.330999999999996
- type: map_at_1000
value: 56.333999999999996
- type: map_at_3
value: 51.92
- type: map_at_5
value: 54.010999999999996
- type: mrr_at_1
value: 41.181
- type: mrr_at_10
value: 55.967999999999996
- type: mrr_at_100
value: 56.538
- type: mrr_at_1000
value: 56.542
- type: mrr_at_3
value: 51.980000000000004
- type: mrr_at_5
value: 54.208999999999996
- type: ndcg_at_1
value: 40.754000000000005
- type: ndcg_at_10
value: 63.605000000000004
- type: ndcg_at_100
value: 66.05199999999999
- type: ndcg_at_1000
value: 66.12
- type: ndcg_at_3
value: 55.708
- type: ndcg_at_5
value: 59.452000000000005
- type: precision_at_1
value: 40.754000000000005
- type: precision_at_10
value: 8.841000000000001
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.238
- type: precision_at_5
value: 15.149000000000001
- type: recall_at_1
value: 40.754000000000005
- type: recall_at_10
value: 88.407
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 66.714
- type: recall_at_5
value: 75.747
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 48.74884539679369
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 42.8075893810716
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 62.128470519187736
- type: mrr
value: 74.28065778481289
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 89.24629081484655
- type: cos_sim_spearman
value: 86.93752309911496
- type: euclidean_pearson
value: 87.58589628573816
- type: euclidean_spearman
value: 88.05622328825284
- type: manhattan_pearson
value: 87.5594959805773
- type: manhattan_spearman
value: 88.19658793233961
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 86.9512987012987
- type: f1
value: 86.92515357973708
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 39.10263762928872
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 36.69711517426737
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 32.327
- type: map_at_10
value: 44.099
- type: map_at_100
value: 45.525
- type: map_at_1000
value: 45.641999999999996
- type: map_at_3
value: 40.47
- type: map_at_5
value: 42.36
- type: mrr_at_1
value: 39.199
- type: mrr_at_10
value: 49.651
- type: mrr_at_100
value: 50.29
- type: mrr_at_1000
value: 50.329
- type: mrr_at_3
value: 46.924
- type: mrr_at_5
value: 48.548
- type: ndcg_at_1
value: 39.199
- type: ndcg_at_10
value: 50.773
- type: ndcg_at_100
value: 55.67999999999999
- type: ndcg_at_1000
value: 57.495
- type: ndcg_at_3
value: 45.513999999999996
- type: ndcg_at_5
value: 47.703
- type: precision_at_1
value: 39.199
- type: precision_at_10
value: 9.914000000000001
- type: precision_at_100
value: 1.5310000000000001
- type: precision_at_1000
value: 0.198
- type: precision_at_3
value: 21.984
- type: precision_at_5
value: 15.737000000000002
- type: recall_at_1
value: 32.327
- type: recall_at_10
value: 63.743
- type: recall_at_100
value: 84.538
- type: recall_at_1000
value: 96.089
- type: recall_at_3
value: 48.065000000000005
- type: recall_at_5
value: 54.519
- type: map_at_1
value: 32.671
- type: map_at_10
value: 42.954
- type: map_at_100
value: 44.151
- type: map_at_1000
value: 44.287
- type: map_at_3
value: 39.912
- type: map_at_5
value: 41.798
- type: mrr_at_1
value: 41.465
- type: mrr_at_10
value: 49.351
- type: mrr_at_100
value: 49.980000000000004
- type: mrr_at_1000
value: 50.016000000000005
- type: mrr_at_3
value: 47.144000000000005
- type: mrr_at_5
value: 48.592999999999996
- type: ndcg_at_1
value: 41.465
- type: ndcg_at_10
value: 48.565999999999995
- type: ndcg_at_100
value: 52.76499999999999
- type: ndcg_at_1000
value: 54.749
- type: ndcg_at_3
value: 44.57
- type: ndcg_at_5
value: 46.759
- type: precision_at_1
value: 41.465
- type: precision_at_10
value: 9.107999999999999
- type: precision_at_100
value: 1.433
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 21.423000000000002
- type: precision_at_5
value: 15.414
- type: recall_at_1
value: 32.671
- type: recall_at_10
value: 57.738
- type: recall_at_100
value: 75.86500000000001
- type: recall_at_1000
value: 88.36
- type: recall_at_3
value: 45.626
- type: recall_at_5
value: 51.812000000000005
- type: map_at_1
value: 41.185
- type: map_at_10
value: 53.929
- type: map_at_100
value: 54.92
- type: map_at_1000
value: 54.967999999999996
- type: map_at_3
value: 50.70400000000001
- type: map_at_5
value: 52.673
- type: mrr_at_1
value: 47.398
- type: mrr_at_10
value: 57.303000000000004
- type: mrr_at_100
value: 57.959
- type: mrr_at_1000
value: 57.985
- type: mrr_at_3
value: 54.932
- type: mrr_at_5
value: 56.464999999999996
- type: ndcg_at_1
value: 47.398
- type: ndcg_at_10
value: 59.653
- type: ndcg_at_100
value: 63.627
- type: ndcg_at_1000
value: 64.596
- type: ndcg_at_3
value: 54.455
- type: ndcg_at_5
value: 57.245000000000005
- type: precision_at_1
value: 47.398
- type: precision_at_10
value: 9.524000000000001
- type: precision_at_100
value: 1.243
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 24.389
- type: precision_at_5
value: 16.752
- type: recall_at_1
value: 41.185
- type: recall_at_10
value: 73.193
- type: recall_at_100
value: 90.357
- type: recall_at_1000
value: 97.253
- type: recall_at_3
value: 59.199999999999996
- type: recall_at_5
value: 66.118
- type: map_at_1
value: 27.27
- type: map_at_10
value: 36.223
- type: map_at_100
value: 37.218
- type: map_at_1000
value: 37.293
- type: map_at_3
value: 33.503
- type: map_at_5
value: 35.097
- type: mrr_at_1
value: 29.492
- type: mrr_at_10
value: 38.352000000000004
- type: mrr_at_100
value: 39.188
- type: mrr_at_1000
value: 39.247
- type: mrr_at_3
value: 35.876000000000005
- type: mrr_at_5
value: 37.401
- type: ndcg_at_1
value: 29.492
- type: ndcg_at_10
value: 41.239
- type: ndcg_at_100
value: 46.066
- type: ndcg_at_1000
value: 47.992000000000004
- type: ndcg_at_3
value: 36.11
- type: ndcg_at_5
value: 38.772
- type: precision_at_1
value: 29.492
- type: precision_at_10
value: 6.260000000000001
- type: precision_at_100
value: 0.914
- type: precision_at_1000
value: 0.11100000000000002
- type: precision_at_3
value: 15.104000000000001
- type: precision_at_5
value: 10.644
- type: recall_at_1
value: 27.27
- type: recall_at_10
value: 54.589
- type: recall_at_100
value: 76.70700000000001
- type: recall_at_1000
value: 91.158
- type: recall_at_3
value: 40.974
- type: recall_at_5
value: 47.327000000000005
- type: map_at_1
value: 17.848
- type: map_at_10
value: 26.207
- type: map_at_100
value: 27.478
- type: map_at_1000
value: 27.602
- type: map_at_3
value: 23.405
- type: map_at_5
value: 24.98
- type: mrr_at_1
value: 21.891
- type: mrr_at_10
value: 31.041999999999998
- type: mrr_at_100
value: 32.092
- type: mrr_at_1000
value: 32.151999999999994
- type: mrr_at_3
value: 28.358
- type: mrr_at_5
value: 29.969
- type: ndcg_at_1
value: 21.891
- type: ndcg_at_10
value: 31.585
- type: ndcg_at_100
value: 37.531
- type: ndcg_at_1000
value: 40.256
- type: ndcg_at_3
value: 26.508
- type: ndcg_at_5
value: 28.894
- type: precision_at_1
value: 21.891
- type: precision_at_10
value: 5.795999999999999
- type: precision_at_100
value: 0.9990000000000001
- type: precision_at_1000
value: 0.13799999999999998
- type: precision_at_3
value: 12.769
- type: precision_at_5
value: 9.279
- type: recall_at_1
value: 17.848
- type: recall_at_10
value: 43.452
- type: recall_at_100
value: 69.216
- type: recall_at_1000
value: 88.102
- type: recall_at_3
value: 29.18
- type: recall_at_5
value: 35.347
- type: map_at_1
value: 30.94
- type: map_at_10
value: 41.248000000000005
- type: map_at_100
value: 42.495
- type: map_at_1000
value: 42.602000000000004
- type: map_at_3
value: 37.939
- type: map_at_5
value: 39.924
- type: mrr_at_1
value: 37.824999999999996
- type: mrr_at_10
value: 47.041
- type: mrr_at_100
value: 47.83
- type: mrr_at_1000
value: 47.878
- type: mrr_at_3
value: 44.466
- type: mrr_at_5
value: 46.111999999999995
- type: ndcg_at_1
value: 37.824999999999996
- type: ndcg_at_10
value: 47.223
- type: ndcg_at_100
value: 52.394
- type: ndcg_at_1000
value: 54.432
- type: ndcg_at_3
value: 42.032000000000004
- type: ndcg_at_5
value: 44.772
- type: precision_at_1
value: 37.824999999999996
- type: precision_at_10
value: 8.393
- type: precision_at_100
value: 1.2890000000000001
- type: precision_at_1000
value: 0.164
- type: precision_at_3
value: 19.698
- type: precision_at_5
value: 14.013
- type: recall_at_1
value: 30.94
- type: recall_at_10
value: 59.316
- type: recall_at_100
value: 80.783
- type: recall_at_1000
value: 94.15400000000001
- type: recall_at_3
value: 44.712
- type: recall_at_5
value: 51.932
- type: map_at_1
value: 27.104
- type: map_at_10
value: 36.675999999999995
- type: map_at_100
value: 38.076
- type: map_at_1000
value: 38.189
- type: map_at_3
value: 33.733999999999995
- type: map_at_5
value: 35.287
- type: mrr_at_1
value: 33.904
- type: mrr_at_10
value: 42.55
- type: mrr_at_100
value: 43.434
- type: mrr_at_1000
value: 43.494
- type: mrr_at_3
value: 40.126
- type: mrr_at_5
value: 41.473
- type: ndcg_at_1
value: 33.904
- type: ndcg_at_10
value: 42.414
- type: ndcg_at_100
value: 48.203
- type: ndcg_at_1000
value: 50.437
- type: ndcg_at_3
value: 37.633
- type: ndcg_at_5
value: 39.67
- type: precision_at_1
value: 33.904
- type: precision_at_10
value: 7.82
- type: precision_at_100
value: 1.2409999999999999
- type: precision_at_1000
value: 0.159
- type: precision_at_3
value: 17.884
- type: precision_at_5
value: 12.648000000000001
- type: recall_at_1
value: 27.104
- type: recall_at_10
value: 53.563
- type: recall_at_100
value: 78.557
- type: recall_at_1000
value: 93.533
- type: recall_at_3
value: 39.92
- type: recall_at_5
value: 45.457
- type: map_at_1
value: 27.707749999999997
- type: map_at_10
value: 36.961
- type: map_at_100
value: 38.158833333333334
- type: map_at_1000
value: 38.270333333333326
- type: map_at_3
value: 34.07183333333334
- type: map_at_5
value: 35.69533333333334
- type: mrr_at_1
value: 32.81875
- type: mrr_at_10
value: 41.293
- type: mrr_at_100
value: 42.116499999999995
- type: mrr_at_1000
value: 42.170249999999996
- type: mrr_at_3
value: 38.83983333333333
- type: mrr_at_5
value: 40.29775
- type: ndcg_at_1
value: 32.81875
- type: ndcg_at_10
value: 42.355
- type: ndcg_at_100
value: 47.41374999999999
- type: ndcg_at_1000
value: 49.5805
- type: ndcg_at_3
value: 37.52825
- type: ndcg_at_5
value: 39.83266666666667
- type: precision_at_1
value: 32.81875
- type: precision_at_10
value: 7.382416666666666
- type: precision_at_100
value: 1.1640833333333334
- type: precision_at_1000
value: 0.15383333333333335
- type: precision_at_3
value: 17.134166666666665
- type: precision_at_5
value: 12.174833333333336
- type: recall_at_1
value: 27.707749999999997
- type: recall_at_10
value: 53.945
- type: recall_at_100
value: 76.191
- type: recall_at_1000
value: 91.101
- type: recall_at_3
value: 40.39083333333334
- type: recall_at_5
value: 46.40083333333333
- type: map_at_1
value: 26.482
- type: map_at_10
value: 33.201
- type: map_at_100
value: 34.107
- type: map_at_1000
value: 34.197
- type: map_at_3
value: 31.174000000000003
- type: map_at_5
value: 32.279
- type: mrr_at_1
value: 29.908
- type: mrr_at_10
value: 36.235
- type: mrr_at_100
value: 37.04
- type: mrr_at_1000
value: 37.105
- type: mrr_at_3
value: 34.355999999999995
- type: mrr_at_5
value: 35.382999999999996
- type: ndcg_at_1
value: 29.908
- type: ndcg_at_10
value: 37.325
- type: ndcg_at_100
value: 41.795
- type: ndcg_at_1000
value: 44.105
- type: ndcg_at_3
value: 33.555
- type: ndcg_at_5
value: 35.266999999999996
- type: precision_at_1
value: 29.908
- type: precision_at_10
value: 5.721
- type: precision_at_100
value: 0.8630000000000001
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 14.008000000000001
- type: precision_at_5
value: 9.754999999999999
- type: recall_at_1
value: 26.482
- type: recall_at_10
value: 47.072
- type: recall_at_100
value: 67.27
- type: recall_at_1000
value: 84.371
- type: recall_at_3
value: 36.65
- type: recall_at_5
value: 40.774
- type: map_at_1
value: 18.815
- type: map_at_10
value: 26.369999999999997
- type: map_at_100
value: 27.458
- type: map_at_1000
value: 27.588
- type: map_at_3
value: 23.990000000000002
- type: map_at_5
value: 25.345000000000002
- type: mrr_at_1
value: 22.953000000000003
- type: mrr_at_10
value: 30.342999999999996
- type: mrr_at_100
value: 31.241000000000003
- type: mrr_at_1000
value: 31.319000000000003
- type: mrr_at_3
value: 28.16
- type: mrr_at_5
value: 29.406
- type: ndcg_at_1
value: 22.953000000000003
- type: ndcg_at_10
value: 31.151
- type: ndcg_at_100
value: 36.309000000000005
- type: ndcg_at_1000
value: 39.227000000000004
- type: ndcg_at_3
value: 26.921
- type: ndcg_at_5
value: 28.938000000000002
- type: precision_at_1
value: 22.953000000000003
- type: precision_at_10
value: 5.602
- type: precision_at_100
value: 0.9530000000000001
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 12.606
- type: precision_at_5
value: 9.119
- type: recall_at_1
value: 18.815
- type: recall_at_10
value: 41.574
- type: recall_at_100
value: 64.84400000000001
- type: recall_at_1000
value: 85.406
- type: recall_at_3
value: 29.694
- type: recall_at_5
value: 34.935
- type: map_at_1
value: 27.840999999999998
- type: map_at_10
value: 36.797999999999995
- type: map_at_100
value: 37.993
- type: map_at_1000
value: 38.086999999999996
- type: map_at_3
value: 34.050999999999995
- type: map_at_5
value: 35.379
- type: mrr_at_1
value: 32.649
- type: mrr_at_10
value: 41.025
- type: mrr_at_100
value: 41.878
- type: mrr_at_1000
value: 41.929
- type: mrr_at_3
value: 38.573
- type: mrr_at_5
value: 39.715
- type: ndcg_at_1
value: 32.649
- type: ndcg_at_10
value: 42.142
- type: ndcg_at_100
value: 47.558
- type: ndcg_at_1000
value: 49.643
- type: ndcg_at_3
value: 37.12
- type: ndcg_at_5
value: 38.983000000000004
- type: precision_at_1
value: 32.649
- type: precision_at_10
value: 7.08
- type: precision_at_100
value: 1.1039999999999999
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 16.698
- type: precision_at_5
value: 11.511000000000001
- type: recall_at_1
value: 27.840999999999998
- type: recall_at_10
value: 54.245
- type: recall_at_100
value: 77.947
- type: recall_at_1000
value: 92.36999999999999
- type: recall_at_3
value: 40.146
- type: recall_at_5
value: 44.951
- type: map_at_1
value: 26.529000000000003
- type: map_at_10
value: 35.010000000000005
- type: map_at_100
value: 36.647
- type: map_at_1000
value: 36.857
- type: map_at_3
value: 31.968000000000004
- type: map_at_5
value: 33.554
- type: mrr_at_1
value: 31.818
- type: mrr_at_10
value: 39.550999999999995
- type: mrr_at_100
value: 40.54
- type: mrr_at_1000
value: 40.596
- type: mrr_at_3
value: 36.726
- type: mrr_at_5
value: 38.416
- type: ndcg_at_1
value: 31.818
- type: ndcg_at_10
value: 40.675
- type: ndcg_at_100
value: 46.548
- type: ndcg_at_1000
value: 49.126
- type: ndcg_at_3
value: 35.829
- type: ndcg_at_5
value: 38
- type: precision_at_1
value: 31.818
- type: precision_at_10
value: 7.826
- type: precision_at_100
value: 1.538
- type: precision_at_1000
value: 0.24
- type: precision_at_3
value: 16.601
- type: precision_at_5
value: 12.095
- type: recall_at_1
value: 26.529000000000003
- type: recall_at_10
value: 51.03
- type: recall_at_100
value: 77.556
- type: recall_at_1000
value: 93.804
- type: recall_at_3
value: 36.986000000000004
- type: recall_at_5
value: 43.096000000000004
- type: map_at_1
value: 23.480999999999998
- type: map_at_10
value: 30.817
- type: map_at_100
value: 31.838
- type: map_at_1000
value: 31.932
- type: map_at_3
value: 28.011999999999997
- type: map_at_5
value: 29.668
- type: mrr_at_1
value: 25.323
- type: mrr_at_10
value: 33.072
- type: mrr_at_100
value: 33.926
- type: mrr_at_1000
value: 33.993
- type: mrr_at_3
value: 30.436999999999998
- type: mrr_at_5
value: 32.092
- type: ndcg_at_1
value: 25.323
- type: ndcg_at_10
value: 35.514
- type: ndcg_at_100
value: 40.489000000000004
- type: ndcg_at_1000
value: 42.908
- type: ndcg_at_3
value: 30.092000000000002
- type: ndcg_at_5
value: 32.989000000000004
- type: precision_at_1
value: 25.323
- type: precision_at_10
value: 5.545
- type: precision_at_100
value: 0.861
- type: precision_at_1000
value: 0.117
- type: precision_at_3
value: 12.446
- type: precision_at_5
value: 9.131
- type: recall_at_1
value: 23.480999999999998
- type: recall_at_10
value: 47.825
- type: recall_at_100
value: 70.652
- type: recall_at_1000
value: 88.612
- type: recall_at_3
value: 33.537
- type: recall_at_5
value: 40.542
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: climate-fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 13.333999999999998
- type: map_at_10
value: 22.524
- type: map_at_100
value: 24.506
- type: map_at_1000
value: 24.715
- type: map_at_3
value: 19.022
- type: map_at_5
value: 20.693
- type: mrr_at_1
value: 29.186
- type: mrr_at_10
value: 41.22
- type: mrr_at_100
value: 42.16
- type: mrr_at_1000
value: 42.192
- type: mrr_at_3
value: 38.013000000000005
- type: mrr_at_5
value: 39.704
- type: ndcg_at_1
value: 29.186
- type: ndcg_at_10
value: 31.167
- type: ndcg_at_100
value: 38.879000000000005
- type: ndcg_at_1000
value: 42.376000000000005
- type: ndcg_at_3
value: 25.817
- type: ndcg_at_5
value: 27.377000000000002
- type: precision_at_1
value: 29.186
- type: precision_at_10
value: 9.693999999999999
- type: precision_at_100
value: 1.8030000000000002
- type: precision_at_1000
value: 0.246
- type: precision_at_3
value: 19.11
- type: precision_at_5
value: 14.344999999999999
- type: recall_at_1
value: 13.333999999999998
- type: recall_at_10
value: 37.092000000000006
- type: recall_at_100
value: 63.651
- type: recall_at_1000
value: 83.05
- type: recall_at_3
value: 23.74
- type: recall_at_5
value: 28.655
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: dbpedia-entity
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 9.151
- type: map_at_10
value: 19.653000000000002
- type: map_at_100
value: 28.053
- type: map_at_1000
value: 29.709000000000003
- type: map_at_3
value: 14.191
- type: map_at_5
value: 16.456
- type: mrr_at_1
value: 66.25
- type: mrr_at_10
value: 74.4
- type: mrr_at_100
value: 74.715
- type: mrr_at_1000
value: 74.726
- type: mrr_at_3
value: 72.417
- type: mrr_at_5
value: 73.667
- type: ndcg_at_1
value: 54.25
- type: ndcg_at_10
value: 40.77
- type: ndcg_at_100
value: 46.359
- type: ndcg_at_1000
value: 54.193000000000005
- type: ndcg_at_3
value: 44.832
- type: ndcg_at_5
value: 42.63
- type: precision_at_1
value: 66.25
- type: precision_at_10
value: 32.175
- type: precision_at_100
value: 10.668
- type: precision_at_1000
value: 2.067
- type: precision_at_3
value: 47.667
- type: precision_at_5
value: 41.3
- type: recall_at_1
value: 9.151
- type: recall_at_10
value: 25.003999999999998
- type: recall_at_100
value: 52.976
- type: recall_at_1000
value: 78.315
- type: recall_at_3
value: 15.487
- type: recall_at_5
value: 18.999
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 51.89999999999999
- type: f1
value: 46.47777925067403
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: fever
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 73.706
- type: map_at_10
value: 82.423
- type: map_at_100
value: 82.67999999999999
- type: map_at_1000
value: 82.694
- type: map_at_3
value: 81.328
- type: map_at_5
value: 82.001
- type: mrr_at_1
value: 79.613
- type: mrr_at_10
value: 87.07000000000001
- type: mrr_at_100
value: 87.169
- type: mrr_at_1000
value: 87.17
- type: mrr_at_3
value: 86.404
- type: mrr_at_5
value: 86.856
- type: ndcg_at_1
value: 79.613
- type: ndcg_at_10
value: 86.289
- type: ndcg_at_100
value: 87.201
- type: ndcg_at_1000
value: 87.428
- type: ndcg_at_3
value: 84.625
- type: ndcg_at_5
value: 85.53699999999999
- type: precision_at_1
value: 79.613
- type: precision_at_10
value: 10.399
- type: precision_at_100
value: 1.1079999999999999
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 32.473
- type: precision_at_5
value: 20.132
- type: recall_at_1
value: 73.706
- type: recall_at_10
value: 93.559
- type: recall_at_100
value: 97.188
- type: recall_at_1000
value: 98.555
- type: recall_at_3
value: 88.98700000000001
- type: recall_at_5
value: 91.373
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: fiqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 19.841
- type: map_at_10
value: 32.643
- type: map_at_100
value: 34.575
- type: map_at_1000
value: 34.736
- type: map_at_3
value: 28.317999999999998
- type: map_at_5
value: 30.964000000000002
- type: mrr_at_1
value: 39.660000000000004
- type: mrr_at_10
value: 48.620000000000005
- type: mrr_at_100
value: 49.384
- type: mrr_at_1000
value: 49.415
- type: mrr_at_3
value: 45.988
- type: mrr_at_5
value: 47.361
- type: ndcg_at_1
value: 39.660000000000004
- type: ndcg_at_10
value: 40.646
- type: ndcg_at_100
value: 47.657
- type: ndcg_at_1000
value: 50.428
- type: ndcg_at_3
value: 36.689
- type: ndcg_at_5
value: 38.211
- type: precision_at_1
value: 39.660000000000004
- type: precision_at_10
value: 11.235000000000001
- type: precision_at_100
value: 1.8530000000000002
- type: precision_at_1000
value: 0.23600000000000002
- type: precision_at_3
value: 24.587999999999997
- type: precision_at_5
value: 18.395
- type: recall_at_1
value: 19.841
- type: recall_at_10
value: 48.135
- type: recall_at_100
value: 74.224
- type: recall_at_1000
value: 90.826
- type: recall_at_3
value: 33.536
- type: recall_at_5
value: 40.311
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: hotpotqa
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 40.358
- type: map_at_10
value: 64.497
- type: map_at_100
value: 65.362
- type: map_at_1000
value: 65.41900000000001
- type: map_at_3
value: 61.06700000000001
- type: map_at_5
value: 63.317
- type: mrr_at_1
value: 80.716
- type: mrr_at_10
value: 86.10799999999999
- type: mrr_at_100
value: 86.265
- type: mrr_at_1000
value: 86.27
- type: mrr_at_3
value: 85.271
- type: mrr_at_5
value: 85.82499999999999
- type: ndcg_at_1
value: 80.716
- type: ndcg_at_10
value: 72.597
- type: ndcg_at_100
value: 75.549
- type: ndcg_at_1000
value: 76.61
- type: ndcg_at_3
value: 67.874
- type: ndcg_at_5
value: 70.655
- type: precision_at_1
value: 80.716
- type: precision_at_10
value: 15.148
- type: precision_at_100
value: 1.745
- type: precision_at_1000
value: 0.188
- type: precision_at_3
value: 43.597
- type: precision_at_5
value: 28.351
- type: recall_at_1
value: 40.358
- type: recall_at_10
value: 75.739
- type: recall_at_100
value: 87.259
- type: recall_at_1000
value: 94.234
- type: recall_at_3
value: 65.39500000000001
- type: recall_at_5
value: 70.878
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 90.80799999999998
- type: ap
value: 86.81350378180757
- type: f1
value: 90.79901248314215
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: msmarco
config: default
split: dev
revision: None
metrics:
- type: map_at_1
value: 22.096
- type: map_at_10
value: 34.384
- type: map_at_100
value: 35.541
- type: map_at_1000
value: 35.589999999999996
- type: map_at_3
value: 30.496000000000002
- type: map_at_5
value: 32.718
- type: mrr_at_1
value: 22.750999999999998
- type: mrr_at_10
value: 35.024
- type: mrr_at_100
value: 36.125
- type: mrr_at_1000
value: 36.168
- type: mrr_at_3
value: 31.225
- type: mrr_at_5
value: 33.416000000000004
- type: ndcg_at_1
value: 22.750999999999998
- type: ndcg_at_10
value: 41.351
- type: ndcg_at_100
value: 46.92
- type: ndcg_at_1000
value: 48.111
- type: ndcg_at_3
value: 33.439
- type: ndcg_at_5
value: 37.407000000000004
- type: precision_at_1
value: 22.750999999999998
- type: precision_at_10
value: 6.564
- type: precision_at_100
value: 0.935
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.288
- type: precision_at_5
value: 10.581999999999999
- type: recall_at_1
value: 22.096
- type: recall_at_10
value: 62.771
- type: recall_at_100
value: 88.529
- type: recall_at_1000
value: 97.55
- type: recall_at_3
value: 41.245
- type: recall_at_5
value: 50.788
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 94.16780665754673
- type: f1
value: 93.96331194859894
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 76.90606475148198
- type: f1
value: 58.58344986604187
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 76.14660390047075
- type: f1
value: 74.31533923533614
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 80.16139878950908
- type: f1
value: 80.18532656824924
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 32.949880906135085
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 31.56300351524862
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 31.196521894371315
- type: mrr
value: 32.22644231694389
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: nfcorpus
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.783
- type: map_at_10
value: 14.549000000000001
- type: map_at_100
value: 18.433
- type: map_at_1000
value: 19.949
- type: map_at_3
value: 10.936
- type: map_at_5
value: 12.514
- type: mrr_at_1
value: 47.368
- type: mrr_at_10
value: 56.42
- type: mrr_at_100
value: 56.908
- type: mrr_at_1000
value: 56.95
- type: mrr_at_3
value: 54.283
- type: mrr_at_5
value: 55.568
- type: ndcg_at_1
value: 45.666000000000004
- type: ndcg_at_10
value: 37.389
- type: ndcg_at_100
value: 34.253
- type: ndcg_at_1000
value: 43.059999999999995
- type: ndcg_at_3
value: 42.725
- type: ndcg_at_5
value: 40.193
- type: precision_at_1
value: 47.368
- type: precision_at_10
value: 27.988000000000003
- type: precision_at_100
value: 8.672
- type: precision_at_1000
value: 2.164
- type: precision_at_3
value: 40.248
- type: precision_at_5
value: 34.737
- type: recall_at_1
value: 6.783
- type: recall_at_10
value: 17.838
- type: recall_at_100
value: 33.672000000000004
- type: recall_at_1000
value: 66.166
- type: recall_at_3
value: 11.849
- type: recall_at_5
value: 14.205000000000002
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: nq
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 31.698999999999998
- type: map_at_10
value: 46.556
- type: map_at_100
value: 47.652
- type: map_at_1000
value: 47.68
- type: map_at_3
value: 42.492000000000004
- type: map_at_5
value: 44.763999999999996
- type: mrr_at_1
value: 35.747
- type: mrr_at_10
value: 49.242999999999995
- type: mrr_at_100
value: 50.052
- type: mrr_at_1000
value: 50.068
- type: mrr_at_3
value: 45.867000000000004
- type: mrr_at_5
value: 47.778999999999996
- type: ndcg_at_1
value: 35.717999999999996
- type: ndcg_at_10
value: 54.14600000000001
- type: ndcg_at_100
value: 58.672999999999995
- type: ndcg_at_1000
value: 59.279
- type: ndcg_at_3
value: 46.407
- type: ndcg_at_5
value: 50.181
- type: precision_at_1
value: 35.717999999999996
- type: precision_at_10
value: 8.844000000000001
- type: precision_at_100
value: 1.139
- type: precision_at_1000
value: 0.12
- type: precision_at_3
value: 20.993000000000002
- type: precision_at_5
value: 14.791000000000002
- type: recall_at_1
value: 31.698999999999998
- type: recall_at_10
value: 74.693
- type: recall_at_100
value: 94.15299999999999
- type: recall_at_1000
value: 98.585
- type: recall_at_3
value: 54.388999999999996
- type: recall_at_5
value: 63.08200000000001
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 71.283
- type: map_at_10
value: 85.24000000000001
- type: map_at_100
value: 85.882
- type: map_at_1000
value: 85.897
- type: map_at_3
value: 82.326
- type: map_at_5
value: 84.177
- type: mrr_at_1
value: 82.21000000000001
- type: mrr_at_10
value: 88.228
- type: mrr_at_100
value: 88.32
- type: mrr_at_1000
value: 88.32
- type: mrr_at_3
value: 87.323
- type: mrr_at_5
value: 87.94800000000001
- type: ndcg_at_1
value: 82.17999999999999
- type: ndcg_at_10
value: 88.9
- type: ndcg_at_100
value: 90.079
- type: ndcg_at_1000
value: 90.158
- type: ndcg_at_3
value: 86.18299999999999
- type: ndcg_at_5
value: 87.71799999999999
- type: precision_at_1
value: 82.17999999999999
- type: precision_at_10
value: 13.464
- type: precision_at_100
value: 1.533
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 37.693
- type: precision_at_5
value: 24.792
- type: recall_at_1
value: 71.283
- type: recall_at_10
value: 95.742
- type: recall_at_100
value: 99.67200000000001
- type: recall_at_1000
value: 99.981
- type: recall_at_3
value: 87.888
- type: recall_at_5
value: 92.24
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 56.24267063669042
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 62.88056988932578
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 4.903
- type: map_at_10
value: 13.202
- type: map_at_100
value: 15.5
- type: map_at_1000
value: 15.870999999999999
- type: map_at_3
value: 9.407
- type: map_at_5
value: 11.238
- type: mrr_at_1
value: 24.2
- type: mrr_at_10
value: 35.867
- type: mrr_at_100
value: 37.001
- type: mrr_at_1000
value: 37.043
- type: mrr_at_3
value: 32.5
- type: mrr_at_5
value: 34.35
- type: ndcg_at_1
value: 24.2
- type: ndcg_at_10
value: 21.731
- type: ndcg_at_100
value: 30.7
- type: ndcg_at_1000
value: 36.618
- type: ndcg_at_3
value: 20.72
- type: ndcg_at_5
value: 17.954
- type: precision_at_1
value: 24.2
- type: precision_at_10
value: 11.33
- type: precision_at_100
value: 2.4410000000000003
- type: precision_at_1000
value: 0.386
- type: precision_at_3
value: 19.667
- type: precision_at_5
value: 15.86
- type: recall_at_1
value: 4.903
- type: recall_at_10
value: 22.962
- type: recall_at_100
value: 49.563
- type: recall_at_1000
value: 78.238
- type: recall_at_3
value: 11.953
- type: recall_at_5
value: 16.067999999999998
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 84.12694254604078
- type: cos_sim_spearman
value: 80.30141815181918
- type: euclidean_pearson
value: 81.34015449877128
- type: euclidean_spearman
value: 80.13984197010849
- type: manhattan_pearson
value: 81.31767068124086
- type: manhattan_spearman
value: 80.11720513114103
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 86.13112984010417
- type: cos_sim_spearman
value: 78.03063573402875
- type: euclidean_pearson
value: 83.51928418844804
- type: euclidean_spearman
value: 78.4045235411144
- type: manhattan_pearson
value: 83.49981637388689
- type: manhattan_spearman
value: 78.4042575139372
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 82.50327987379504
- type: cos_sim_spearman
value: 84.18556767756205
- type: euclidean_pearson
value: 82.69684424327679
- type: euclidean_spearman
value: 83.5368106038335
- type: manhattan_pearson
value: 82.57967581007374
- type: manhattan_spearman
value: 83.43009053133697
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 82.50756863007814
- type: cos_sim_spearman
value: 82.27204331279108
- type: euclidean_pearson
value: 81.39535251429741
- type: euclidean_spearman
value: 81.84386626336239
- type: manhattan_pearson
value: 81.34281737280695
- type: manhattan_spearman
value: 81.81149375673166
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 86.8727714856726
- type: cos_sim_spearman
value: 87.95738287792312
- type: euclidean_pearson
value: 86.62920602795887
- type: euclidean_spearman
value: 87.05207355381243
- type: manhattan_pearson
value: 86.53587918472225
- type: manhattan_spearman
value: 86.95382961029586
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 83.52240359769479
- type: cos_sim_spearman
value: 85.47685776238286
- type: euclidean_pearson
value: 84.25815333483058
- type: euclidean_spearman
value: 85.27415639683198
- type: manhattan_pearson
value: 84.29127757025637
- type: manhattan_spearman
value: 85.30226224917351
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 86.42501708915708
- type: cos_sim_spearman
value: 86.42276182795041
- type: euclidean_pearson
value: 86.5408207354761
- type: euclidean_spearman
value: 85.46096321750838
- type: manhattan_pearson
value: 86.54177303026881
- type: manhattan_spearman
value: 85.50313151916117
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: 6d1ba47164174a496b7fa5d3569dae26a6813b80
metrics:
- type: cos_sim_pearson
value: 64.86521089250766
- type: cos_sim_spearman
value: 65.94868540323003
- type: euclidean_pearson
value: 67.16569626533084
- type: euclidean_spearman
value: 66.37667004134917
- type: manhattan_pearson
value: 67.1482365102333
- type: manhattan_spearman
value: 66.53240122580029
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 84.64746265365318
- type: cos_sim_spearman
value: 86.41888825906786
- type: euclidean_pearson
value: 85.27453642725811
- type: euclidean_spearman
value: 85.94095796602544
- type: manhattan_pearson
value: 85.28643660505334
- type: manhattan_spearman
value: 85.95028003260744
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 87.48903153618527
- type: mrr
value: 96.41081503826601
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: scifact
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 58.594
- type: map_at_10
value: 69.296
- type: map_at_100
value: 69.782
- type: map_at_1000
value: 69.795
- type: map_at_3
value: 66.23
- type: map_at_5
value: 68.293
- type: mrr_at_1
value: 61.667
- type: mrr_at_10
value: 70.339
- type: mrr_at_100
value: 70.708
- type: mrr_at_1000
value: 70.722
- type: mrr_at_3
value: 68
- type: mrr_at_5
value: 69.56700000000001
- type: ndcg_at_1
value: 61.667
- type: ndcg_at_10
value: 74.039
- type: ndcg_at_100
value: 76.103
- type: ndcg_at_1000
value: 76.47800000000001
- type: ndcg_at_3
value: 68.967
- type: ndcg_at_5
value: 71.96900000000001
- type: precision_at_1
value: 61.667
- type: precision_at_10
value: 9.866999999999999
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 27.111
- type: precision_at_5
value: 18.2
- type: recall_at_1
value: 58.594
- type: recall_at_10
value: 87.422
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 99.667
- type: recall_at_3
value: 74.217
- type: recall_at_5
value: 81.539
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.85049504950496
- type: cos_sim_ap
value: 96.33111544137081
- type: cos_sim_f1
value: 92.35443037974684
- type: cos_sim_precision
value: 93.53846153846153
- type: cos_sim_recall
value: 91.2
- type: dot_accuracy
value: 99.82376237623762
- type: dot_ap
value: 95.38082527310888
- type: dot_f1
value: 90.90909090909092
- type: dot_precision
value: 92.90187891440502
- type: dot_recall
value: 89
- type: euclidean_accuracy
value: 99.84851485148515
- type: euclidean_ap
value: 96.32316003996347
- type: euclidean_f1
value: 92.2071392659628
- type: euclidean_precision
value: 92.71991911021233
- type: euclidean_recall
value: 91.7
- type: manhattan_accuracy
value: 99.84851485148515
- type: manhattan_ap
value: 96.3655668249217
- type: manhattan_f1
value: 92.18356026222895
- type: manhattan_precision
value: 92.98067141403867
- type: manhattan_recall
value: 91.4
- type: max_accuracy
value: 99.85049504950496
- type: max_ap
value: 96.3655668249217
- type: max_f1
value: 92.35443037974684
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 65.94861371629051
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 35.009430451385
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 54.61164066427969
- type: mrr
value: 55.49710603938544
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 30.622620124907662
- type: cos_sim_spearman
value: 31.0678351356163
- type: dot_pearson
value: 30.863727693306814
- type: dot_spearman
value: 31.230306567021255
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.22
- type: map_at_10
value: 2.011
- type: map_at_100
value: 10.974
- type: map_at_1000
value: 25.819
- type: map_at_3
value: 0.6649999999999999
- type: map_at_5
value: 1.076
- type: mrr_at_1
value: 86
- type: mrr_at_10
value: 91.8
- type: mrr_at_100
value: 91.8
- type: mrr_at_1000
value: 91.8
- type: mrr_at_3
value: 91
- type: mrr_at_5
value: 91.8
- type: ndcg_at_1
value: 82
- type: ndcg_at_10
value: 78.07300000000001
- type: ndcg_at_100
value: 58.231
- type: ndcg_at_1000
value: 51.153000000000006
- type: ndcg_at_3
value: 81.123
- type: ndcg_at_5
value: 81.059
- type: precision_at_1
value: 86
- type: precision_at_10
value: 83
- type: precision_at_100
value: 59.38
- type: precision_at_1000
value: 22.55
- type: precision_at_3
value: 87.333
- type: precision_at_5
value: 86.8
- type: recall_at_1
value: 0.22
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 14.069
- type: recall_at_1000
value: 47.678
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.161
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: webis-touche2020
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 2.809
- type: map_at_10
value: 10.394
- type: map_at_100
value: 16.598
- type: map_at_1000
value: 18.142
- type: map_at_3
value: 5.572
- type: map_at_5
value: 7.1370000000000005
- type: mrr_at_1
value: 32.653
- type: mrr_at_10
value: 46.564
- type: mrr_at_100
value: 47.469
- type: mrr_at_1000
value: 47.469
- type: mrr_at_3
value: 42.177
- type: mrr_at_5
value: 44.524
- type: ndcg_at_1
value: 30.612000000000002
- type: ndcg_at_10
value: 25.701
- type: ndcg_at_100
value: 37.532
- type: ndcg_at_1000
value: 48.757
- type: ndcg_at_3
value: 28.199999999999996
- type: ndcg_at_5
value: 25.987
- type: precision_at_1
value: 32.653
- type: precision_at_10
value: 23.469
- type: precision_at_100
value: 7.9799999999999995
- type: precision_at_1000
value: 1.5350000000000001
- type: precision_at_3
value: 29.932
- type: precision_at_5
value: 26.122
- type: recall_at_1
value: 2.809
- type: recall_at_10
value: 16.887
- type: recall_at_100
value: 48.67
- type: recall_at_1000
value: 82.89699999999999
- type: recall_at_3
value: 6.521000000000001
- type: recall_at_5
value: 9.609
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 71.57860000000001
- type: ap
value: 13.82629211536393
- type: f1
value: 54.59860966183956
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 59.38030560271647
- type: f1
value: 59.69685552567865
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 51.4736717043405
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 86.92853311080646
- type: cos_sim_ap
value: 77.67872502591382
- type: cos_sim_f1
value: 70.33941236068895
- type: cos_sim_precision
value: 67.63273258645884
- type: cos_sim_recall
value: 73.27176781002639
- type: dot_accuracy
value: 85.79603027954938
- type: dot_ap
value: 73.73786190233379
- type: dot_f1
value: 67.3437901774235
- type: dot_precision
value: 65.67201604814443
- type: dot_recall
value: 69.10290237467018
- type: euclidean_accuracy
value: 86.94045419324074
- type: euclidean_ap
value: 77.6687791535167
- type: euclidean_f1
value: 70.47209214023542
- type: euclidean_precision
value: 67.7207492094381
- type: euclidean_recall
value: 73.45646437994723
- type: manhattan_accuracy
value: 86.87488823985218
- type: manhattan_ap
value: 77.63373392430728
- type: manhattan_f1
value: 70.40920716112532
- type: manhattan_precision
value: 68.31265508684864
- type: manhattan_recall
value: 72.63852242744063
- type: max_accuracy
value: 86.94045419324074
- type: max_ap
value: 77.67872502591382
- type: max_f1
value: 70.47209214023542
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.67155664221679
- type: cos_sim_ap
value: 85.64591703003417
- type: cos_sim_f1
value: 77.59531005352656
- type: cos_sim_precision
value: 73.60967184801382
- type: cos_sim_recall
value: 82.03726516784724
- type: dot_accuracy
value: 88.41541506578181
- type: dot_ap
value: 84.6482788957769
- type: dot_f1
value: 77.04748541466657
- type: dot_precision
value: 74.02440754931176
- type: dot_recall
value: 80.3279950723745
- type: euclidean_accuracy
value: 88.63080684596576
- type: euclidean_ap
value: 85.44570045321562
- type: euclidean_f1
value: 77.28769403336106
- type: euclidean_precision
value: 72.90600040958427
- type: euclidean_recall
value: 82.22975053895904
- type: manhattan_accuracy
value: 88.59393798269105
- type: manhattan_ap
value: 85.40271361038187
- type: manhattan_f1
value: 77.17606419344392
- type: manhattan_precision
value: 72.4447747078295
- type: manhattan_recall
value: 82.5685247921158
- type: max_accuracy
value: 88.67155664221679
- type: max_ap
value: 85.64591703003417
- type: max_f1
value: 77.59531005352656
---
***See Disclaimer below***
----
# A Teradata Vantage compatible Embeddings Model
# BAAI/bge-base-en-v1.5
## Overview of this Model
An embedding model that maps text (sentences/paragraphs) into a vector. The [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) model is well known for its effectiveness in capturing semantic meaning in text data. It is a state-of-the-art model trained on a large corpus, capable of generating high-quality text embeddings.
- 109.48M params (Sizes in ONNX format - "fp32": 415.72MB, "int8": 104.75MB, "uint8": 104.75MB)
- 512 maximum input tokens
- 768 dimensions of output vector (see the quick sanity check below)
- License: MIT. The released models can be used for commercial purposes free of charge.
- Reference to Original Model: https://huggingface.co/BAAI/bge-base-en-v1.5
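If you want to verify the dimensionality locally before touching Vantage, a standard sentence-transformers call is enough. This is a minimal sketch and not part of the original Teradata workflow; it assumes `sentence-transformers` is installed and downloads the model from Hugging Face:
```python
# Quick local sanity check of the embedding dimensionality (illustrative sketch).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
vector = model.encode("How is the weather today?", normalize_embeddings=True)
print(vector.shape)  # expected: (768,), matching the model card above
```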
## Quickstart: Deploying this Model in Teradata Vantage
We have pre-converted the model into the ONNX format compatible with BYOM 6.0, eliminating the need for manual conversion.
**Note:** Ensure you have access to a Teradata Database with BYOM 6.0 installed.
To get started, clone the pre-converted model directly from the Teradata HuggingFace repository.
```python
import teradataml as tdml
import getpass
from huggingface_hub import hf_hub_download
model_name = "bge-base-en-v1.5"
number_dimensions_output = 768
model_file_name = "model.onnx"
# Step 1: Download Model from Teradata HuggingFace Page
hf_hub_download(repo_id=f"Teradata/{model_name}", filename=f"onnx/{model_file_name}", local_dir="./")
hf_hub_download(repo_id=f"Teradata/{model_name}", filename=f"tokenizer.json", local_dir="./")
# Step 2: Create Connection to Vantage
tdml.create_context(host = input('enter your hostname'),
username=input('enter your username'),
password = getpass.getpass("enter your password"))
# Step 3: Load Models into Vantage
# a) Embedding model
tdml.save_byom(model_id = model_name, # must be unique in the models table
model_file = f"onnx/{model_file_name}",
table_name = 'embeddings_models' )
# b) Tokenizer
tdml.save_byom(model_id = model_name, # must be unique in the models table
model_file = 'tokenizer.json',
table_name = 'embeddings_tokenizers')
# Step 4: Test ONNXEmbeddings Function
# Note that ONNXEmbeddings expects the 'payload' column to be 'txt'.
# If it has a different name, just rename it in a subquery/CTE.
input_table = "emails.emails"
embeddings_query = f"""
SELECT
*
FROM mldb.ONNXEmbeddings(
on {input_table} as InputTable
on (select * from embeddings_models where model_id = '{model_name}') as ModelTable DIMENSION
on (select model as tokenizer from embeddings_tokenizers where model_id = '{model_name}') as TokenizerTable DIMENSION
using
Accumulate('id', 'txt')
ModelOutputTensor('sentence_embedding')
EnableMemoryCheck('false')
OutputFormat('FLOAT32({number_dimensions_output})')
OverwriteCachedModel('true')
) a
"""
DF_embeddings = tdml.DataFrame.from_query(embeddings_query)
DF_embeddings
```
## What Can I Do with the Embeddings?
Teradata Vantage includes pre-built in-database functions to process embeddings further. Explore the following examples (a small client-side cosine-similarity sketch follows the list):
- **Semantic Clustering with TD_KMeans:** [Semantic Clustering Python Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/Semantic_Clustering_Python.ipynb)
- **Semantic Distance with TD_VectorDistance:** [Semantic Similarity Python Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/Semantic_Similarity_Python.ipynb)
- **RAG-Based Application with TD_VectorDistance:** [RAG and Bedrock Query PDF Notebook](https://github.com/Teradata/jupyter-demos/blob/main/UseCases/Language_Models_InVantage/RAG_and_Bedrock_QueryPDF.ipynb)
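For intuition about what the in-database distance functions compute, here is a minimal client-side sketch in numpy. It is illustrative only and is not part of the original card; in production you would keep this computation in-database with TD_VectorDistance:
```python
# Illustrative cosine similarity between two embedding vectors (client-side sketch).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: dot product of the vectors divided by the product of their norms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example with two 768-dimensional vectors (random here; real embeddings in practice).
rng = np.random.default_rng(0)
a, b = rng.normal(size=768), rng.normal(size=768)
print(cosine_similarity(a, b))
```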
## Deep Dive into Model Conversion to ONNX
**The steps below outline how we converted the open-source Hugging Face model into an ONNX file compatible with the in-database ONNXEmbeddings function.**
You do not need to perform these steps—they are provided solely for documentation and transparency. However, they may be helpful if you wish to convert another model to the required format.
### Part 1. Importing and Converting Model using optimum
We start by importing the pre-trained [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) model from Hugging Face.
To enhance performance and ensure compatibility with various execution environments, we'll use the [Optimum](https://github.com/huggingface/optimum) utility to convert the model into the ONNX (Open Neural Network Exchange) format.
After converting to ONNX, we fix the opset in the ONNX file for compatibility with the ONNX runtime used in Teradata Vantage.
We generate ONNX files for several precisions: fp32, int8, and uint8.
You can find the detailed conversion steps in the file [convert.py](./convert.py)
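The authoritative steps live in [convert.py](./convert.py); the sketch below only illustrates the general idea using the `optimum` Python API. The target opset number here is an assumption, not necessarily the value used for the published files:
```python
# Illustrative sketch of the export-and-pin-opset idea (see convert.py for the real steps).
import onnx
from onnx import version_converter
from optimum.onnxruntime import ORTModelForFeatureExtraction
from transformers import AutoTokenizer

model_id = "BAAI/bge-base-en-v1.5"

# Export the Hugging Face model to ONNX with optimum.
ort_model = ORTModelForFeatureExtraction.from_pretrained(model_id, export=True)
ort_model.save_pretrained("./onnx")
AutoTokenizer.from_pretrained(model_id).save_pretrained("./onnx")

# Pin the opset so the file matches the ONNX runtime shipped with Teradata Vantage.
onnx_model = onnx.load("./onnx/model.onnx")
converted = version_converter.convert_version(onnx_model, 16)  # 16 is an assumed target opset
onnx.save(converted, "./onnx/model.onnx")
```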
### Part 2. Running the model in Python with onnxruntime & compare results
Once the fixes are applied, we test the correctness of the ONNX model by calculating the cosine similarity between two texts with both native SentenceTransformers and the ONNX runtime, then comparing the results.
If the results are identical, it confirms that the ONNX model gives the same result as the native models, validating its correctness and suitability for further use in the database.
```python
import onnxruntime as rt
import transformers
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

model_id = "BAAI/bge-base-en-v1.5"
sentences_1 = 'How is the weather today?'
sentences_2 = 'What is the current weather like today?'

# Calculate embeddings with the ONNX model
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
predef_sess = rt.InferenceSession("onnx/model.onnx")
enc1 = tokenizer(sentences_1)
embeddings_1_onnx = predef_sess.run(None, {"input_ids": [enc1.input_ids],
                                           "attention_mask": [enc1.attention_mask]})
enc2 = tokenizer(sentences_2)
embeddings_2_onnx = predef_sess.run(None, {"input_ids": [enc2.input_ids],
                                           "attention_mask": [enc2.attention_mask]})

# Calculate embeddings with SentenceTransformer
model = SentenceTransformer(model_id, trust_remote_code=True)
embeddings_1_sentence_transformer = model.encode(sentences_1, normalize_embeddings=True)
embeddings_2_sentence_transformer = model.encode(sentences_2, normalize_embeddings=True)

# Compare results
print("Cosine similarity for embeddings calculated with ONNX: " + str(cos_sim(embeddings_1_onnx[1][0], embeddings_2_onnx[1][0])))
print("Cosine similarity for embeddings calculated with SentenceTransformer: " + str(cos_sim(embeddings_1_sentence_transformer, embeddings_2_sentence_transformer)))
```
You can find the detailed ONNX vs. SentenceTransformer result comparison steps in the file [test_local.py](./test_local.py)
----
DISCLAIMER: The content herein (“Content”) is provided “AS IS” and is not covered by any Teradata Operations, Inc. and its affiliates (“Teradata”) agreements. Its listing here does not constitute certification or endorsement by Teradata.
To the extent any of the Content contains or is related to any artificial intelligence (“AI”) or other language learning models (“Models”) that interoperate with the products and services of Teradata, by accessing, bringing, deploying or using such Models, you acknowledge and agree that you are solely responsible for ensuring compliance with all applicable laws, regulations, and restrictions governing the use, deployment, and distribution of AI technologies. This includes, but is not limited to, AI Diffusion Rules, European Union AI Act, AI-related laws and regulations, privacy laws, export controls, and financial or sector-specific regulations.
While Teradata may provide support, guidance, or assistance in the deployment or implementation of Models to interoperate with Teradata’s products and/or services, you remain fully responsible for ensuring that your Models, data, and applications comply with all relevant legal and regulatory obligations. Our assistance does not constitute legal or regulatory approval, and Teradata disclaims any liability arising from non-compliance with applicable laws.
You must determine the suitability of the Models for any purpose. Given the probabilistic nature of machine learning and modeling, the use of the Models may in some situations result in incorrect output that does not accurately reflect the action generated. You should evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output. | [
"SEMANTIC_SIMILARITY",
"SUMMARIZATION"
] | [
"BIOSSES",
"SCIFACT"
] |
Hoshino-Yumetsuki/gte-Qwen2-7B-instruct-Q8_0-GGUF | Hoshino-Yumetsuki | sentence-similarity | [
"sentence-transformers",
"gguf",
"mteb",
"transformers",
"Qwen2",
"sentence-similarity",
"llama-cpp",
"gguf-my-repo",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:quantized:Alibaba-NLP/gte-Qwen2-7B-instruct",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | 2025-03-02T13:40:04 | 2025-03-02T13:40:40 | 32 | 0 | ---
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
license: apache-2.0
tags:
- mteb
- sentence-transformers
- transformers
- Qwen2
- sentence-similarity
- llama-cpp
- gguf-my-repo
model-index:
- name: gte-qwen2-7B-instruct
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 91.31343283582089
- type: ap
value: 67.64251402604096
- type: f1
value: 87.53372530755692
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 97.497825
- type: ap
value: 96.30329547047529
- type: f1
value: 97.49769793778039
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 62.564
- type: f1
value: 60.975777935041066
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 36.486000000000004
- type: map_at_10
value: 54.842
- type: map_at_100
value: 55.206999999999994
- type: map_at_1000
value: 55.206999999999994
- type: map_at_3
value: 49.893
- type: map_at_5
value: 53.105000000000004
- type: mrr_at_1
value: 37.34
- type: mrr_at_10
value: 55.143
- type: mrr_at_100
value: 55.509
- type: mrr_at_1000
value: 55.509
- type: mrr_at_3
value: 50.212999999999994
- type: mrr_at_5
value: 53.432
- type: ndcg_at_1
value: 36.486000000000004
- type: ndcg_at_10
value: 64.273
- type: ndcg_at_100
value: 65.66199999999999
- type: ndcg_at_1000
value: 65.66199999999999
- type: ndcg_at_3
value: 54.352999999999994
- type: ndcg_at_5
value: 60.131
- type: precision_at_1
value: 36.486000000000004
- type: precision_at_10
value: 9.395000000000001
- type: precision_at_100
value: 0.996
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 22.428
- type: precision_at_5
value: 16.259
- type: recall_at_1
value: 36.486000000000004
- type: recall_at_10
value: 93.95400000000001
- type: recall_at_100
value: 99.644
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 67.283
- type: recall_at_5
value: 81.294
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 56.461169803700564
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 51.73600434466286
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 67.57827065898053
- type: mrr
value: 79.08136569493911
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 83.53324575999243
- type: cos_sim_spearman
value: 81.37173362822374
- type: euclidean_pearson
value: 82.19243335103444
- type: euclidean_spearman
value: 81.33679307304334
- type: manhattan_pearson
value: 82.38752665975699
- type: manhattan_spearman
value: 81.31510583189689
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 87.56818181818181
- type: f1
value: 87.25826722019875
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 50.09239610327673
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 46.64733054606282
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 33.997
- type: map_at_10
value: 48.176
- type: map_at_100
value: 49.82
- type: map_at_1000
value: 49.924
- type: map_at_3
value: 43.626
- type: map_at_5
value: 46.275
- type: mrr_at_1
value: 42.059999999999995
- type: mrr_at_10
value: 53.726
- type: mrr_at_100
value: 54.398
- type: mrr_at_1000
value: 54.416
- type: mrr_at_3
value: 50.714999999999996
- type: mrr_at_5
value: 52.639
- type: ndcg_at_1
value: 42.059999999999995
- type: ndcg_at_10
value: 55.574999999999996
- type: ndcg_at_100
value: 60.744
- type: ndcg_at_1000
value: 61.85699999999999
- type: ndcg_at_3
value: 49.363
- type: ndcg_at_5
value: 52.44
- type: precision_at_1
value: 42.059999999999995
- type: precision_at_10
value: 11.101999999999999
- type: precision_at_100
value: 1.73
- type: precision_at_1000
value: 0.218
- type: precision_at_3
value: 24.464
- type: precision_at_5
value: 18.026
- type: recall_at_1
value: 33.997
- type: recall_at_10
value: 70.35900000000001
- type: recall_at_100
value: 91.642
- type: recall_at_1000
value: 97.977
- type: recall_at_3
value: 52.76
- type: recall_at_5
value: 61.148
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 35.884
- type: map_at_10
value: 48.14
- type: map_at_100
value: 49.5
- type: map_at_1000
value: 49.63
- type: map_at_3
value: 44.646
- type: map_at_5
value: 46.617999999999995
- type: mrr_at_1
value: 44.458999999999996
- type: mrr_at_10
value: 53.751000000000005
- type: mrr_at_100
value: 54.37800000000001
- type: mrr_at_1000
value: 54.415
- type: mrr_at_3
value: 51.815
- type: mrr_at_5
value: 52.882
- type: ndcg_at_1
value: 44.458999999999996
- type: ndcg_at_10
value: 54.157
- type: ndcg_at_100
value: 58.362
- type: ndcg_at_1000
value: 60.178
- type: ndcg_at_3
value: 49.661
- type: ndcg_at_5
value: 51.74999999999999
- type: precision_at_1
value: 44.458999999999996
- type: precision_at_10
value: 10.248
- type: precision_at_100
value: 1.5890000000000002
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 23.928
- type: precision_at_5
value: 16.878999999999998
- type: recall_at_1
value: 35.884
- type: recall_at_10
value: 64.798
- type: recall_at_100
value: 82.345
- type: recall_at_1000
value: 93.267
- type: recall_at_3
value: 51.847
- type: recall_at_5
value: 57.601
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 39.383
- type: map_at_10
value: 53.714
- type: map_at_100
value: 54.838
- type: map_at_1000
value: 54.87800000000001
- type: map_at_3
value: 50.114999999999995
- type: map_at_5
value: 52.153000000000006
- type: mrr_at_1
value: 45.016
- type: mrr_at_10
value: 56.732000000000006
- type: mrr_at_100
value: 57.411
- type: mrr_at_1000
value: 57.431
- type: mrr_at_3
value: 54.044000000000004
- type: mrr_at_5
value: 55.639
- type: ndcg_at_1
value: 45.016
- type: ndcg_at_10
value: 60.228
- type: ndcg_at_100
value: 64.277
- type: ndcg_at_1000
value: 65.07
- type: ndcg_at_3
value: 54.124
- type: ndcg_at_5
value: 57.147000000000006
- type: precision_at_1
value: 45.016
- type: precision_at_10
value: 9.937
- type: precision_at_100
value: 1.288
- type: precision_at_1000
value: 0.13899999999999998
- type: precision_at_3
value: 24.471999999999998
- type: precision_at_5
value: 16.991
- type: recall_at_1
value: 39.383
- type: recall_at_10
value: 76.175
- type: recall_at_100
value: 93.02
- type: recall_at_1000
value: 98.60900000000001
- type: recall_at_3
value: 60.265
- type: recall_at_5
value: 67.46600000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 27.426000000000002
- type: map_at_10
value: 37.397000000000006
- type: map_at_100
value: 38.61
- type: map_at_1000
value: 38.678000000000004
- type: map_at_3
value: 34.150999999999996
- type: map_at_5
value: 36.137
- type: mrr_at_1
value: 29.944
- type: mrr_at_10
value: 39.654
- type: mrr_at_100
value: 40.638000000000005
- type: mrr_at_1000
value: 40.691
- type: mrr_at_3
value: 36.817
- type: mrr_at_5
value: 38.524
- type: ndcg_at_1
value: 29.944
- type: ndcg_at_10
value: 43.094
- type: ndcg_at_100
value: 48.789
- type: ndcg_at_1000
value: 50.339999999999996
- type: ndcg_at_3
value: 36.984
- type: ndcg_at_5
value: 40.248
- type: precision_at_1
value: 29.944
- type: precision_at_10
value: 6.78
- type: precision_at_100
value: 1.024
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 15.895000000000001
- type: precision_at_5
value: 11.39
- type: recall_at_1
value: 27.426000000000002
- type: recall_at_10
value: 58.464000000000006
- type: recall_at_100
value: 84.193
- type: recall_at_1000
value: 95.52000000000001
- type: recall_at_3
value: 42.172
- type: recall_at_5
value: 50.101
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 19.721
- type: map_at_10
value: 31.604
- type: map_at_100
value: 32.972
- type: map_at_1000
value: 33.077
- type: map_at_3
value: 27.218999999999998
- type: map_at_5
value: 29.53
- type: mrr_at_1
value: 25.0
- type: mrr_at_10
value: 35.843
- type: mrr_at_100
value: 36.785000000000004
- type: mrr_at_1000
value: 36.842000000000006
- type: mrr_at_3
value: 32.193
- type: mrr_at_5
value: 34.264
- type: ndcg_at_1
value: 25.0
- type: ndcg_at_10
value: 38.606
- type: ndcg_at_100
value: 44.272
- type: ndcg_at_1000
value: 46.527
- type: ndcg_at_3
value: 30.985000000000003
- type: ndcg_at_5
value: 34.43
- type: precision_at_1
value: 25.0
- type: precision_at_10
value: 7.811
- type: precision_at_100
value: 1.203
- type: precision_at_1000
value: 0.15
- type: precision_at_3
value: 15.423
- type: precision_at_5
value: 11.791
- type: recall_at_1
value: 19.721
- type: recall_at_10
value: 55.625
- type: recall_at_100
value: 79.34400000000001
- type: recall_at_1000
value: 95.208
- type: recall_at_3
value: 35.19
- type: recall_at_5
value: 43.626
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 33.784
- type: map_at_10
value: 47.522
- type: map_at_100
value: 48.949999999999996
- type: map_at_1000
value: 49.038
- type: map_at_3
value: 43.284
- type: map_at_5
value: 45.629
- type: mrr_at_1
value: 41.482
- type: mrr_at_10
value: 52.830999999999996
- type: mrr_at_100
value: 53.559999999999995
- type: mrr_at_1000
value: 53.588
- type: mrr_at_3
value: 50.016000000000005
- type: mrr_at_5
value: 51.614000000000004
- type: ndcg_at_1
value: 41.482
- type: ndcg_at_10
value: 54.569
- type: ndcg_at_100
value: 59.675999999999995
- type: ndcg_at_1000
value: 60.989000000000004
- type: ndcg_at_3
value: 48.187000000000005
- type: ndcg_at_5
value: 51.183
- type: precision_at_1
value: 41.482
- type: precision_at_10
value: 10.221
- type: precision_at_100
value: 1.486
- type: precision_at_1000
value: 0.17500000000000002
- type: precision_at_3
value: 23.548
- type: precision_at_5
value: 16.805
- type: recall_at_1
value: 33.784
- type: recall_at_10
value: 69.798
- type: recall_at_100
value: 90.098
- type: recall_at_1000
value: 98.176
- type: recall_at_3
value: 52.127
- type: recall_at_5
value: 59.861
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 28.038999999999998
- type: map_at_10
value: 41.904
- type: map_at_100
value: 43.36
- type: map_at_1000
value: 43.453
- type: map_at_3
value: 37.785999999999994
- type: map_at_5
value: 40.105000000000004
- type: mrr_at_1
value: 35.046
- type: mrr_at_10
value: 46.926
- type: mrr_at_100
value: 47.815000000000005
- type: mrr_at_1000
value: 47.849000000000004
- type: mrr_at_3
value: 44.273
- type: mrr_at_5
value: 45.774
- type: ndcg_at_1
value: 35.046
- type: ndcg_at_10
value: 48.937000000000005
- type: ndcg_at_100
value: 54.544000000000004
- type: ndcg_at_1000
value: 56.069
- type: ndcg_at_3
value: 42.858000000000004
- type: ndcg_at_5
value: 45.644
- type: precision_at_1
value: 35.046
- type: precision_at_10
value: 9.452
- type: precision_at_100
value: 1.429
- type: precision_at_1000
value: 0.173
- type: precision_at_3
value: 21.346999999999998
- type: precision_at_5
value: 15.342
- type: recall_at_1
value: 28.038999999999998
- type: recall_at_10
value: 64.59700000000001
- type: recall_at_100
value: 87.735
- type: recall_at_1000
value: 97.41300000000001
- type: recall_at_3
value: 47.368
- type: recall_at_5
value: 54.93900000000001
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 28.17291666666667
- type: map_at_10
value: 40.025749999999995
- type: map_at_100
value: 41.39208333333333
- type: map_at_1000
value: 41.499249999999996
- type: map_at_3
value: 36.347
- type: map_at_5
value: 38.41391666666667
- type: mrr_at_1
value: 33.65925
- type: mrr_at_10
value: 44.085499999999996
- type: mrr_at_100
value: 44.94116666666667
- type: mrr_at_1000
value: 44.9855
- type: mrr_at_3
value: 41.2815
- type: mrr_at_5
value: 42.91491666666666
- type: ndcg_at_1
value: 33.65925
- type: ndcg_at_10
value: 46.430833333333325
- type: ndcg_at_100
value: 51.761
- type: ndcg_at_1000
value: 53.50899999999999
- type: ndcg_at_3
value: 40.45133333333333
- type: ndcg_at_5
value: 43.31483333333334
- type: precision_at_1
value: 33.65925
- type: precision_at_10
value: 8.4995
- type: precision_at_100
value: 1.3210000000000004
- type: precision_at_1000
value: 0.16591666666666666
- type: precision_at_3
value: 19.165083333333335
- type: precision_at_5
value: 13.81816666666667
- type: recall_at_1
value: 28.17291666666667
- type: recall_at_10
value: 61.12624999999999
- type: recall_at_100
value: 83.97266666666667
- type: recall_at_1000
value: 95.66550000000001
- type: recall_at_3
value: 44.661249999999995
- type: recall_at_5
value: 51.983333333333334
- type: map_at_1
value: 17.936
- type: map_at_10
value: 27.399
- type: map_at_100
value: 28.632
- type: map_at_1000
value: 28.738000000000003
- type: map_at_3
value: 24.456
- type: map_at_5
value: 26.06
- type: mrr_at_1
value: 19.224
- type: mrr_at_10
value: 28.998
- type: mrr_at_100
value: 30.11
- type: mrr_at_1000
value: 30.177
- type: mrr_at_3
value: 26.247999999999998
- type: mrr_at_5
value: 27.708
- type: ndcg_at_1
value: 19.224
- type: ndcg_at_10
value: 32.911
- type: ndcg_at_100
value: 38.873999999999995
- type: ndcg_at_1000
value: 41.277
- type: ndcg_at_3
value: 27.142
- type: ndcg_at_5
value: 29.755
- type: precision_at_1
value: 19.224
- type: precision_at_10
value: 5.6930000000000005
- type: precision_at_100
value: 0.9259999999999999
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 12.138
- type: precision_at_5
value: 8.909
- type: recall_at_1
value: 17.936
- type: recall_at_10
value: 48.096
- type: recall_at_100
value: 75.389
- type: recall_at_1000
value: 92.803
- type: recall_at_3
value: 32.812999999999995
- type: recall_at_5
value: 38.851
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 24.681
- type: map_at_10
value: 34.892
- type: map_at_100
value: 35.996
- type: map_at_1000
value: 36.083
- type: map_at_3
value: 31.491999999999997
- type: map_at_5
value: 33.632
- type: mrr_at_1
value: 28.528
- type: mrr_at_10
value: 37.694
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.668
- type: mrr_at_3
value: 34.714
- type: mrr_at_5
value: 36.616
- type: ndcg_at_1
value: 28.528
- type: ndcg_at_10
value: 40.703
- type: ndcg_at_100
value: 45.993
- type: ndcg_at_1000
value: 47.847
- type: ndcg_at_3
value: 34.622
- type: ndcg_at_5
value: 38.035999999999994
- type: precision_at_1
value: 28.528
- type: precision_at_10
value: 6.902
- type: precision_at_100
value: 1.0370000000000001
- type: precision_at_1000
value: 0.126
- type: precision_at_3
value: 15.798000000000002
- type: precision_at_5
value: 11.655999999999999
- type: recall_at_1
value: 24.681
- type: recall_at_10
value: 55.81
- type: recall_at_100
value: 79.785
- type: recall_at_1000
value: 92.959
- type: recall_at_3
value: 39.074
- type: recall_at_5
value: 47.568
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 18.627
- type: map_at_10
value: 27.872000000000003
- type: map_at_100
value: 29.237999999999996
- type: map_at_1000
value: 29.363
- type: map_at_3
value: 24.751
- type: map_at_5
value: 26.521
- type: mrr_at_1
value: 23.021
- type: mrr_at_10
value: 31.924000000000003
- type: mrr_at_100
value: 32.922000000000004
- type: mrr_at_1000
value: 32.988
- type: mrr_at_3
value: 29.192
- type: mrr_at_5
value: 30.798
- type: ndcg_at_1
value: 23.021
- type: ndcg_at_10
value: 33.535
- type: ndcg_at_100
value: 39.732
- type: ndcg_at_1000
value: 42.201
- type: ndcg_at_3
value: 28.153
- type: ndcg_at_5
value: 30.746000000000002
- type: precision_at_1
value: 23.021
- type: precision_at_10
value: 6.459
- type: precision_at_100
value: 1.1320000000000001
- type: precision_at_1000
value: 0.153
- type: precision_at_3
value: 13.719000000000001
- type: precision_at_5
value: 10.193000000000001
- type: recall_at_1
value: 18.627
- type: recall_at_10
value: 46.463
- type: recall_at_100
value: 74.226
- type: recall_at_1000
value: 91.28500000000001
- type: recall_at_3
value: 31.357000000000003
- type: recall_at_5
value: 38.067
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 31.457
- type: map_at_10
value: 42.888
- type: map_at_100
value: 44.24
- type: map_at_1000
value: 44.327
- type: map_at_3
value: 39.588
- type: map_at_5
value: 41.423
- type: mrr_at_1
value: 37.126999999999995
- type: mrr_at_10
value: 47.083000000000006
- type: mrr_at_100
value: 47.997
- type: mrr_at_1000
value: 48.044
- type: mrr_at_3
value: 44.574000000000005
- type: mrr_at_5
value: 46.202
- type: ndcg_at_1
value: 37.126999999999995
- type: ndcg_at_10
value: 48.833
- type: ndcg_at_100
value: 54.327000000000005
- type: ndcg_at_1000
value: 56.011
- type: ndcg_at_3
value: 43.541999999999994
- type: ndcg_at_5
value: 46.127
- type: precision_at_1
value: 37.126999999999995
- type: precision_at_10
value: 8.376999999999999
- type: precision_at_100
value: 1.2309999999999999
- type: precision_at_1000
value: 0.146
- type: precision_at_3
value: 20.211000000000002
- type: precision_at_5
value: 14.16
- type: recall_at_1
value: 31.457
- type: recall_at_10
value: 62.369
- type: recall_at_100
value: 85.444
- type: recall_at_1000
value: 96.65599999999999
- type: recall_at_3
value: 47.961
- type: recall_at_5
value: 54.676
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: BeIR/cqadupstack
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 27.139999999999997
- type: map_at_10
value: 38.801
- type: map_at_100
value: 40.549
- type: map_at_1000
value: 40.802
- type: map_at_3
value: 35.05
- type: map_at_5
value: 36.884
- type: mrr_at_1
value: 33.004
- type: mrr_at_10
value: 43.864
- type: mrr_at_100
value: 44.667
- type: mrr_at_1000
value: 44.717
- type: mrr_at_3
value: 40.777
- type: mrr_at_5
value: 42.319
- type: ndcg_at_1
value: 33.004
- type: ndcg_at_10
value: 46.022
- type: ndcg_at_100
value: 51.542
- type: ndcg_at_1000
value: 53.742000000000004
- type: ndcg_at_3
value: 39.795
- type: ndcg_at_5
value: 42.272
- type: precision_at_1
value: 33.004
- type: precision_at_10
value: 9.012
- type: precision_at_100
value: 1.7770000000000001
- type: precision_at_1000
value: 0.26
- type: precision_at_3
value: 19.038
- type: precision_at_5
value: 13.675999999999998
- type: recall_at_1
value: 27.139999999999997
- type: recall_at_10
value: 60.961
- type: recall_at_100
value: 84.451
- type: recall_at_1000
value: 98.113
- type: recall_at_3
value: 43.001
- type: recall_at_5
value: 49.896
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 22.076999999999998
- type: map_at_10
value: 35.44
- type: map_at_100
value: 37.651
- type: map_at_1000
value: 37.824999999999996
- type: map_at_3
value: 30.764999999999997
- type: map_at_5
value: 33.26
- type: mrr_at_1
value: 50.163000000000004
- type: mrr_at_10
value: 61.207
- type: mrr_at_100
value: 61.675000000000004
- type: mrr_at_1000
value: 61.692
- type: mrr_at_3
value: 58.60999999999999
- type: mrr_at_5
value: 60.307
- type: ndcg_at_1
value: 50.163000000000004
- type: ndcg_at_10
value: 45.882
- type: ndcg_at_100
value: 53.239999999999995
- type: ndcg_at_1000
value: 55.852000000000004
- type: ndcg_at_3
value: 40.514
- type: ndcg_at_5
value: 42.038
- type: precision_at_1
value: 50.163000000000004
- type: precision_at_10
value: 13.466000000000001
- type: precision_at_100
value: 2.164
- type: precision_at_1000
value: 0.266
- type: precision_at_3
value: 29.707
- type: precision_at_5
value: 21.694
- type: recall_at_1
value: 22.076999999999998
- type: recall_at_10
value: 50.193
- type: recall_at_100
value: 74.993
- type: recall_at_1000
value: 89.131
- type: recall_at_3
value: 35.472
- type: recall_at_5
value: 41.814
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.953
- type: map_at_10
value: 24.515
- type: map_at_100
value: 36.173
- type: map_at_1000
value: 38.351
- type: map_at_3
value: 16.592000000000002
- type: map_at_5
value: 20.036
- type: mrr_at_1
value: 74.25
- type: mrr_at_10
value: 81.813
- type: mrr_at_100
value: 82.006
- type: mrr_at_1000
value: 82.011
- type: mrr_at_3
value: 80.875
- type: mrr_at_5
value: 81.362
- type: ndcg_at_1
value: 62.5
- type: ndcg_at_10
value: 52.42
- type: ndcg_at_100
value: 56.808
- type: ndcg_at_1000
value: 63.532999999999994
- type: ndcg_at_3
value: 56.654
- type: ndcg_at_5
value: 54.18300000000001
- type: precision_at_1
value: 74.25
- type: precision_at_10
value: 42.699999999999996
- type: precision_at_100
value: 13.675
- type: precision_at_1000
value: 2.664
- type: precision_at_3
value: 60.5
- type: precision_at_5
value: 52.800000000000004
- type: recall_at_1
value: 9.953
- type: recall_at_10
value: 30.253999999999998
- type: recall_at_100
value: 62.516000000000005
- type: recall_at_1000
value: 84.163
- type: recall_at_3
value: 18.13
- type: recall_at_5
value: 22.771
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 79.455
- type: f1
value: 74.16798697647569
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 87.531
- type: map_at_10
value: 93.16799999999999
- type: map_at_100
value: 93.341
- type: map_at_1000
value: 93.349
- type: map_at_3
value: 92.444
- type: map_at_5
value: 92.865
- type: mrr_at_1
value: 94.014
- type: mrr_at_10
value: 96.761
- type: mrr_at_100
value: 96.762
- type: mrr_at_1000
value: 96.762
- type: mrr_at_3
value: 96.672
- type: mrr_at_5
value: 96.736
- type: ndcg_at_1
value: 94.014
- type: ndcg_at_10
value: 95.112
- type: ndcg_at_100
value: 95.578
- type: ndcg_at_1000
value: 95.68900000000001
- type: ndcg_at_3
value: 94.392
- type: ndcg_at_5
value: 94.72500000000001
- type: precision_at_1
value: 94.014
- type: precision_at_10
value: 11.065
- type: precision_at_100
value: 1.157
- type: precision_at_1000
value: 0.11800000000000001
- type: precision_at_3
value: 35.259
- type: precision_at_5
value: 21.599
- type: recall_at_1
value: 87.531
- type: recall_at_10
value: 97.356
- type: recall_at_100
value: 98.965
- type: recall_at_1000
value: 99.607
- type: recall_at_3
value: 95.312
- type: recall_at_5
value: 96.295
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 32.055
- type: map_at_10
value: 53.114
- type: map_at_100
value: 55.235
- type: map_at_1000
value: 55.345
- type: map_at_3
value: 45.854
- type: map_at_5
value: 50.025
- type: mrr_at_1
value: 60.34
- type: mrr_at_10
value: 68.804
- type: mrr_at_100
value: 69.309
- type: mrr_at_1000
value: 69.32199999999999
- type: mrr_at_3
value: 66.40899999999999
- type: mrr_at_5
value: 67.976
- type: ndcg_at_1
value: 60.34
- type: ndcg_at_10
value: 62.031000000000006
- type: ndcg_at_100
value: 68.00500000000001
- type: ndcg_at_1000
value: 69.286
- type: ndcg_at_3
value: 56.355999999999995
- type: ndcg_at_5
value: 58.687
- type: precision_at_1
value: 60.34
- type: precision_at_10
value: 17.176
- type: precision_at_100
value: 2.36
- type: precision_at_1000
value: 0.259
- type: precision_at_3
value: 37.14
- type: precision_at_5
value: 27.809
- type: recall_at_1
value: 32.055
- type: recall_at_10
value: 70.91
- type: recall_at_100
value: 91.83
- type: recall_at_1000
value: 98.871
- type: recall_at_3
value: 51.202999999999996
- type: recall_at_5
value: 60.563
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 43.68
- type: map_at_10
value: 64.389
- type: map_at_100
value: 65.24
- type: map_at_1000
value: 65.303
- type: map_at_3
value: 61.309000000000005
- type: map_at_5
value: 63.275999999999996
- type: mrr_at_1
value: 87.36
- type: mrr_at_10
value: 91.12
- type: mrr_at_100
value: 91.227
- type: mrr_at_1000
value: 91.229
- type: mrr_at_3
value: 90.57600000000001
- type: mrr_at_5
value: 90.912
- type: ndcg_at_1
value: 87.36
- type: ndcg_at_10
value: 73.076
- type: ndcg_at_100
value: 75.895
- type: ndcg_at_1000
value: 77.049
- type: ndcg_at_3
value: 68.929
- type: ndcg_at_5
value: 71.28
- type: precision_at_1
value: 87.36
- type: precision_at_10
value: 14.741000000000001
- type: precision_at_100
value: 1.694
- type: precision_at_1000
value: 0.185
- type: precision_at_3
value: 43.043
- type: precision_at_5
value: 27.681
- type: recall_at_1
value: 43.68
- type: recall_at_10
value: 73.707
- type: recall_at_100
value: 84.7
- type: recall_at_1000
value: 92.309
- type: recall_at_3
value: 64.564
- type: recall_at_5
value: 69.203
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 96.75399999999999
- type: ap
value: 95.29389839242187
- type: f1
value: 96.75348377433475
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 25.176
- type: map_at_10
value: 38.598
- type: map_at_100
value: 39.707
- type: map_at_1000
value: 39.744
- type: map_at_3
value: 34.566
- type: map_at_5
value: 36.863
- type: mrr_at_1
value: 25.874000000000002
- type: mrr_at_10
value: 39.214
- type: mrr_at_100
value: 40.251
- type: mrr_at_1000
value: 40.281
- type: mrr_at_3
value: 35.291
- type: mrr_at_5
value: 37.545
- type: ndcg_at_1
value: 25.874000000000002
- type: ndcg_at_10
value: 45.98
- type: ndcg_at_100
value: 51.197
- type: ndcg_at_1000
value: 52.073
- type: ndcg_at_3
value: 37.785999999999994
- type: ndcg_at_5
value: 41.870000000000005
- type: precision_at_1
value: 25.874000000000002
- type: precision_at_10
value: 7.181
- type: precision_at_100
value: 0.979
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 16.051000000000002
- type: precision_at_5
value: 11.713
- type: recall_at_1
value: 25.176
- type: recall_at_10
value: 68.67699999999999
- type: recall_at_100
value: 92.55
- type: recall_at_1000
value: 99.164
- type: recall_at_3
value: 46.372
- type: recall_at_5
value: 56.16
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 99.03784769721841
- type: f1
value: 98.97791641821495
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 91.88326493388054
- type: f1
value: 73.74809928034335
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 85.41358439811701
- type: f1
value: 83.503679460639
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 89.77135171486215
- type: f1
value: 88.89843747468366
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 46.22695362087359
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 44.132372165849425
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 33.35680810650402
- type: mrr
value: 34.72625715637218
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 7.165000000000001
- type: map_at_10
value: 15.424
- type: map_at_100
value: 20.28
- type: map_at_1000
value: 22.065
- type: map_at_3
value: 11.236
- type: map_at_5
value: 13.025999999999998
- type: mrr_at_1
value: 51.702999999999996
- type: mrr_at_10
value: 59.965
- type: mrr_at_100
value: 60.667
- type: mrr_at_1000
value: 60.702999999999996
- type: mrr_at_3
value: 58.772000000000006
- type: mrr_at_5
value: 59.267
- type: ndcg_at_1
value: 49.536
- type: ndcg_at_10
value: 40.6
- type: ndcg_at_100
value: 37.848
- type: ndcg_at_1000
value: 46.657
- type: ndcg_at_3
value: 46.117999999999995
- type: ndcg_at_5
value: 43.619
- type: precision_at_1
value: 51.393
- type: precision_at_10
value: 30.31
- type: precision_at_100
value: 9.972
- type: precision_at_1000
value: 2.329
- type: precision_at_3
value: 43.137
- type: precision_at_5
value: 37.585
- type: recall_at_1
value: 7.165000000000001
- type: recall_at_10
value: 19.689999999999998
- type: recall_at_100
value: 39.237
- type: recall_at_1000
value: 71.417
- type: recall_at_3
value: 12.247
- type: recall_at_5
value: 14.902999999999999
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 42.653999999999996
- type: map_at_10
value: 59.611999999999995
- type: map_at_100
value: 60.32300000000001
- type: map_at_1000
value: 60.336
- type: map_at_3
value: 55.584999999999994
- type: map_at_5
value: 58.19
- type: mrr_at_1
value: 47.683
- type: mrr_at_10
value: 62.06700000000001
- type: mrr_at_100
value: 62.537
- type: mrr_at_1000
value: 62.544999999999995
- type: mrr_at_3
value: 59.178
- type: mrr_at_5
value: 61.034
- type: ndcg_at_1
value: 47.654
- type: ndcg_at_10
value: 67.001
- type: ndcg_at_100
value: 69.73899999999999
- type: ndcg_at_1000
value: 69.986
- type: ndcg_at_3
value: 59.95700000000001
- type: ndcg_at_5
value: 64.025
- type: precision_at_1
value: 47.654
- type: precision_at_10
value: 10.367999999999999
- type: precision_at_100
value: 1.192
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 26.651000000000003
- type: precision_at_5
value: 18.459
- type: recall_at_1
value: 42.653999999999996
- type: recall_at_10
value: 86.619
- type: recall_at_100
value: 98.04899999999999
- type: recall_at_1000
value: 99.812
- type: recall_at_3
value: 68.987
- type: recall_at_5
value: 78.158
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 72.538
- type: map_at_10
value: 86.702
- type: map_at_100
value: 87.31
- type: map_at_1000
value: 87.323
- type: map_at_3
value: 83.87
- type: map_at_5
value: 85.682
- type: mrr_at_1
value: 83.31
- type: mrr_at_10
value: 89.225
- type: mrr_at_100
value: 89.30399999999999
- type: mrr_at_1000
value: 89.30399999999999
- type: mrr_at_3
value: 88.44300000000001
- type: mrr_at_5
value: 89.005
- type: ndcg_at_1
value: 83.32000000000001
- type: ndcg_at_10
value: 90.095
- type: ndcg_at_100
value: 91.12
- type: ndcg_at_1000
value: 91.179
- type: ndcg_at_3
value: 87.606
- type: ndcg_at_5
value: 89.031
- type: precision_at_1
value: 83.32000000000001
- type: precision_at_10
value: 13.641
- type: precision_at_100
value: 1.541
- type: precision_at_1000
value: 0.157
- type: precision_at_3
value: 38.377
- type: precision_at_5
value: 25.162000000000003
- type: recall_at_1
value: 72.538
- type: recall_at_10
value: 96.47200000000001
- type: recall_at_100
value: 99.785
- type: recall_at_1000
value: 99.99900000000001
- type: recall_at_3
value: 89.278
- type: recall_at_5
value: 93.367
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 73.55219145406065
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 282350215ef01743dc01b456c7f5241fa8937f16
metrics:
- type: v_measure
value: 74.13437105242755
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 6.873
- type: map_at_10
value: 17.944
- type: map_at_100
value: 21.171
- type: map_at_1000
value: 21.528
- type: map_at_3
value: 12.415
- type: map_at_5
value: 15.187999999999999
- type: mrr_at_1
value: 33.800000000000004
- type: mrr_at_10
value: 46.455
- type: mrr_at_100
value: 47.378
- type: mrr_at_1000
value: 47.394999999999996
- type: mrr_at_3
value: 42.367
- type: mrr_at_5
value: 44.972
- type: ndcg_at_1
value: 33.800000000000004
- type: ndcg_at_10
value: 28.907
- type: ndcg_at_100
value: 39.695
- type: ndcg_at_1000
value: 44.582
- type: ndcg_at_3
value: 26.949
- type: ndcg_at_5
value: 23.988
- type: precision_at_1
value: 33.800000000000004
- type: precision_at_10
value: 15.079999999999998
- type: precision_at_100
value: 3.056
- type: precision_at_1000
value: 0.42100000000000004
- type: precision_at_3
value: 25.167
- type: precision_at_5
value: 21.26
- type: recall_at_1
value: 6.873
- type: recall_at_10
value: 30.568
- type: recall_at_100
value: 62.062
- type: recall_at_1000
value: 85.37700000000001
- type: recall_at_3
value: 15.312999999999999
- type: recall_at_5
value: 21.575
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: a6ea5a8cab320b040a23452cc28066d9beae2cee
metrics:
- type: cos_sim_pearson
value: 82.37009118256057
- type: cos_sim_spearman
value: 79.27986395671529
- type: euclidean_pearson
value: 79.18037715442115
- type: euclidean_spearman
value: 79.28004791561621
- type: manhattan_pearson
value: 79.34062972800541
- type: manhattan_spearman
value: 79.43106695543402
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 87.48474767383833
- type: cos_sim_spearman
value: 79.54505388752513
- type: euclidean_pearson
value: 83.43282704179565
- type: euclidean_spearman
value: 79.54579919925405
- type: manhattan_pearson
value: 83.77564492427952
- type: manhattan_spearman
value: 79.84558396989286
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 88.803698035802
- type: cos_sim_spearman
value: 88.83451367754881
- type: euclidean_pearson
value: 88.28939285711628
- type: euclidean_spearman
value: 88.83528996073112
- type: manhattan_pearson
value: 88.28017412671795
- type: manhattan_spearman
value: 88.9228828016344
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 85.27469288153428
- type: cos_sim_spearman
value: 83.87477064876288
- type: euclidean_pearson
value: 84.2601737035379
- type: euclidean_spearman
value: 83.87431082479074
- type: manhattan_pearson
value: 84.3621547772745
- type: manhattan_spearman
value: 84.12094375000423
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 88.12749863201587
- type: cos_sim_spearman
value: 88.54287568368565
- type: euclidean_pearson
value: 87.90429700607999
- type: euclidean_spearman
value: 88.5437689576261
- type: manhattan_pearson
value: 88.19276653356833
- type: manhattan_spearman
value: 88.99995393814679
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 85.68398747560902
- type: cos_sim_spearman
value: 86.48815303460574
- type: euclidean_pearson
value: 85.52356631237954
- type: euclidean_spearman
value: 86.486391949551
- type: manhattan_pearson
value: 85.67267981761788
- type: manhattan_spearman
value: 86.7073696332485
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 88.9057107443124
- type: cos_sim_spearman
value: 88.7312168757697
- type: euclidean_pearson
value: 88.72810439714794
- type: euclidean_spearman
value: 88.71976185854771
- type: manhattan_pearson
value: 88.50433745949111
- type: manhattan_spearman
value: 88.51726175544195
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 67.59391795109886
- type: cos_sim_spearman
value: 66.87613008631367
- type: euclidean_pearson
value: 69.23198488262217
- type: euclidean_spearman
value: 66.85427723013692
- type: manhattan_pearson
value: 69.50730124841084
- type: manhattan_spearman
value: 67.10404669820792
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 87.0820605344619
- type: cos_sim_spearman
value: 86.8518089863434
- type: euclidean_pearson
value: 86.31087134689284
- type: euclidean_spearman
value: 86.8518520517941
- type: manhattan_pearson
value: 86.47203796160612
- type: manhattan_spearman
value: 87.1080149734421
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 89.09255369305481
- type: mrr
value: 97.10323445617563
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 61.260999999999996
- type: map_at_10
value: 74.043
- type: map_at_100
value: 74.37700000000001
- type: map_at_1000
value: 74.384
- type: map_at_3
value: 71.222
- type: map_at_5
value: 72.875
- type: mrr_at_1
value: 64.333
- type: mrr_at_10
value: 74.984
- type: mrr_at_100
value: 75.247
- type: mrr_at_1000
value: 75.25500000000001
- type: mrr_at_3
value: 73.167
- type: mrr_at_5
value: 74.35000000000001
- type: ndcg_at_1
value: 64.333
- type: ndcg_at_10
value: 79.06
- type: ndcg_at_100
value: 80.416
- type: ndcg_at_1000
value: 80.55600000000001
- type: ndcg_at_3
value: 74.753
- type: ndcg_at_5
value: 76.97500000000001
- type: precision_at_1
value: 64.333
- type: precision_at_10
value: 10.567
- type: precision_at_100
value: 1.1199999999999999
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 29.889
- type: precision_at_5
value: 19.533
- type: recall_at_1
value: 61.260999999999996
- type: recall_at_10
value: 93.167
- type: recall_at_100
value: 99.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 81.667
- type: recall_at_5
value: 87.394
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.71980198019801
- type: cos_sim_ap
value: 92.81616007802704
- type: cos_sim_f1
value: 85.17548454688318
- type: cos_sim_precision
value: 89.43894389438944
- type: cos_sim_recall
value: 81.3
- type: dot_accuracy
value: 99.71980198019801
- type: dot_ap
value: 92.81398760591358
- type: dot_f1
value: 85.17548454688318
- type: dot_precision
value: 89.43894389438944
- type: dot_recall
value: 81.3
- type: euclidean_accuracy
value: 99.71980198019801
- type: euclidean_ap
value: 92.81560637245072
- type: euclidean_f1
value: 85.17548454688318
- type: euclidean_precision
value: 89.43894389438944
- type: euclidean_recall
value: 81.3
- type: manhattan_accuracy
value: 99.73069306930694
- type: manhattan_ap
value: 93.14005487480794
- type: manhattan_f1
value: 85.56263269639068
- type: manhattan_precision
value: 91.17647058823529
- type: manhattan_recall
value: 80.60000000000001
- type: max_accuracy
value: 99.73069306930694
- type: max_ap
value: 93.14005487480794
- type: max_f1
value: 85.56263269639068
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 79.86443362395185
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 49.40897096662564
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 55.66040806627947
- type: mrr
value: 56.58670475766064
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.51015090598575
- type: cos_sim_spearman
value: 31.35016454939226
- type: dot_pearson
value: 31.5150068731
- type: dot_spearman
value: 31.34790869023487
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: None
metrics:
- type: map_at_1
value: 0.254
- type: map_at_10
value: 2.064
- type: map_at_100
value: 12.909
- type: map_at_1000
value: 31.761
- type: map_at_3
value: 0.738
- type: map_at_5
value: 1.155
- type: mrr_at_1
value: 96.0
- type: mrr_at_10
value: 98.0
- type: mrr_at_100
value: 98.0
- type: mrr_at_1000
value: 98.0
- type: mrr_at_3
value: 98.0
- type: mrr_at_5
value: 98.0
- type: ndcg_at_1
value: 93.0
- type: ndcg_at_10
value: 82.258
- type: ndcg_at_100
value: 64.34
- type: ndcg_at_1000
value: 57.912
- type: ndcg_at_3
value: 90.827
- type: ndcg_at_5
value: 86.79
- type: precision_at_1
value: 96.0
- type: precision_at_10
value: 84.8
- type: precision_at_100
value: 66.0
- type: precision_at_1000
value: 25.356
- type: precision_at_3
value: 94.667
- type: precision_at_5
value: 90.4
- type: recall_at_1
value: 0.254
- type: recall_at_10
value: 2.1950000000000003
- type: recall_at_100
value: 16.088
- type: recall_at_1000
value: 54.559000000000005
- type: recall_at_3
value: 0.75
- type: recall_at_5
value: 1.191
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 2.976
- type: map_at_10
value: 11.389000000000001
- type: map_at_100
value: 18.429000000000002
- type: map_at_1000
value: 20.113
- type: map_at_3
value: 6.483
- type: map_at_5
value: 8.770999999999999
- type: mrr_at_1
value: 40.816
- type: mrr_at_10
value: 58.118
- type: mrr_at_100
value: 58.489999999999995
- type: mrr_at_1000
value: 58.489999999999995
- type: mrr_at_3
value: 53.061
- type: mrr_at_5
value: 57.041
- type: ndcg_at_1
value: 40.816
- type: ndcg_at_10
value: 30.567
- type: ndcg_at_100
value: 42.44
- type: ndcg_at_1000
value: 53.480000000000004
- type: ndcg_at_3
value: 36.016
- type: ndcg_at_5
value: 34.257
- type: precision_at_1
value: 42.857
- type: precision_at_10
value: 25.714
- type: precision_at_100
value: 8.429
- type: precision_at_1000
value: 1.5939999999999999
- type: precision_at_3
value: 36.735
- type: precision_at_5
value: 33.878
- type: recall_at_1
value: 2.976
- type: recall_at_10
value: 17.854999999999997
- type: recall_at_100
value: 51.833
- type: recall_at_1000
value: 86.223
- type: recall_at_3
value: 7.887
- type: recall_at_5
value: 12.026
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: d7c0de2777da35d6aae2200a62c6e0e5af397c4c
metrics:
- type: accuracy
value: 85.1174
- type: ap
value: 30.169441069345748
- type: f1
value: 69.79254701873245
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 72.58347481607245
- type: f1
value: 72.74877295564937
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 53.90586138221305
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 87.35769207844072
- type: cos_sim_ap
value: 77.9645072410354
- type: cos_sim_f1
value: 71.32352941176471
- type: cos_sim_precision
value: 66.5903890160183
- type: cos_sim_recall
value: 76.78100263852242
- type: dot_accuracy
value: 87.37557370209214
- type: dot_ap
value: 77.96250046429908
- type: dot_f1
value: 71.28932757557064
- type: dot_precision
value: 66.95249130938586
- type: dot_recall
value: 76.22691292875989
- type: euclidean_accuracy
value: 87.35173153722357
- type: euclidean_ap
value: 77.96520460741593
- type: euclidean_f1
value: 71.32470733210104
- type: euclidean_precision
value: 66.91329479768785
- type: euclidean_recall
value: 76.35883905013192
- type: manhattan_accuracy
value: 87.25636287774931
- type: manhattan_ap
value: 77.77752485611796
- type: manhattan_f1
value: 71.18148599269183
- type: manhattan_precision
value: 66.10859728506787
- type: manhattan_recall
value: 77.0976253298153
- type: max_accuracy
value: 87.37557370209214
- type: max_ap
value: 77.96520460741593
- type: max_f1
value: 71.32470733210104
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 89.38176737687739
- type: cos_sim_ap
value: 86.58811861657401
- type: cos_sim_f1
value: 79.09430644097604
- type: cos_sim_precision
value: 75.45085977911366
- type: cos_sim_recall
value: 83.10748383122882
- type: dot_accuracy
value: 89.38370784336554
- type: dot_ap
value: 86.58840606004333
- type: dot_f1
value: 79.10179860068133
- type: dot_precision
value: 75.44546153308643
- type: dot_recall
value: 83.13058207576223
- type: euclidean_accuracy
value: 89.38564830985369
- type: euclidean_ap
value: 86.58820721061164
- type: euclidean_f1
value: 79.09070942235888
- type: euclidean_precision
value: 75.38729937194697
- type: euclidean_recall
value: 83.17677856482906
- type: manhattan_accuracy
value: 89.40699344122326
- type: manhattan_ap
value: 86.60631843011362
- type: manhattan_f1
value: 79.14949970570925
- type: manhattan_precision
value: 75.78191039729502
- type: manhattan_recall
value: 82.83030489682784
- type: max_accuracy
value: 89.40699344122326
- type: max_ap
value: 86.60631843011362
- type: max_f1
value: 79.14949970570925
- task:
type: STS
dataset:
name: MTEB AFQMC
type: C-MTEB/AFQMC
config: default
split: validation
revision: b44c3b011063adb25877c13823db83bb193913c4
metrics:
- type: cos_sim_pearson
value: 65.58442135663871
- type: cos_sim_spearman
value: 72.2538631361313
- type: euclidean_pearson
value: 70.97255486607429
- type: euclidean_spearman
value: 72.25374250228647
- type: manhattan_pearson
value: 70.83250199989911
- type: manhattan_spearman
value: 72.14819496536272
- task:
type: STS
dataset:
name: MTEB ATEC
type: C-MTEB/ATEC
config: default
split: test
revision: 0f319b1142f28d00e055a6770f3f726ae9b7d865
metrics:
- type: cos_sim_pearson
value: 59.99478404929932
- type: cos_sim_spearman
value: 62.61836216999812
- type: euclidean_pearson
value: 66.86429811933593
- type: euclidean_spearman
value: 62.6183520374191
- type: manhattan_pearson
value: 66.8063778911633
- type: manhattan_spearman
value: 62.569607573241115
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (zh)
type: mteb/amazon_reviews_multi
config: zh
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 53.98400000000001
- type: f1
value: 51.21447361350723
- task:
type: STS
dataset:
name: MTEB BQ
type: C-MTEB/BQ
config: default
split: test
revision: e3dda5e115e487b39ec7e618c0c6a29137052a55
metrics:
- type: cos_sim_pearson
value: 79.11941660686553
- type: cos_sim_spearman
value: 81.25029594540435
- type: euclidean_pearson
value: 82.06973504238826
- type: euclidean_spearman
value: 81.2501989488524
- type: manhattan_pearson
value: 82.10094630392753
- type: manhattan_spearman
value: 81.27987244392389
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringP2P
type: C-MTEB/CLSClusteringP2P
config: default
split: test
revision: 4b6227591c6c1a73bc76b1055f3b7f3588e72476
metrics:
- type: v_measure
value: 47.07270168705156
- task:
type: Clustering
dataset:
name: MTEB CLSClusteringS2S
type: C-MTEB/CLSClusteringS2S
config: default
split: test
revision: e458b3f5414b62b7f9f83499ac1f5497ae2e869f
metrics:
- type: v_measure
value: 45.98511703185043
- task:
type: Reranking
dataset:
name: MTEB CMedQAv1
type: C-MTEB/CMedQAv1-reranking
config: default
split: test
revision: 8d7f1e942507dac42dc58017c1a001c3717da7df
metrics:
- type: map
value: 88.19895157194931
- type: mrr
value: 90.21424603174603
- task:
type: Reranking
dataset:
name: MTEB CMedQAv2
type: C-MTEB/CMedQAv2-reranking
config: default
split: test
revision: 23d186750531a14a0357ca22cd92d712fd512ea0
metrics:
- type: map
value: 88.03317320980119
- type: mrr
value: 89.9461507936508
- task:
type: Retrieval
dataset:
name: MTEB CmedqaRetrieval
type: C-MTEB/CmedqaRetrieval
config: default
split: dev
revision: cd540c506dae1cf9e9a59c3e06f42030d54e7301
metrics:
- type: map_at_1
value: 29.037000000000003
- type: map_at_10
value: 42.001
- type: map_at_100
value: 43.773
- type: map_at_1000
value: 43.878
- type: map_at_3
value: 37.637
- type: map_at_5
value: 40.034
- type: mrr_at_1
value: 43.136
- type: mrr_at_10
value: 51.158
- type: mrr_at_100
value: 52.083
- type: mrr_at_1000
value: 52.12
- type: mrr_at_3
value: 48.733
- type: mrr_at_5
value: 50.025
- type: ndcg_at_1
value: 43.136
- type: ndcg_at_10
value: 48.685
- type: ndcg_at_100
value: 55.513
- type: ndcg_at_1000
value: 57.242000000000004
- type: ndcg_at_3
value: 43.329
- type: ndcg_at_5
value: 45.438
- type: precision_at_1
value: 43.136
- type: precision_at_10
value: 10.56
- type: precision_at_100
value: 1.6129999999999998
- type: precision_at_1000
value: 0.184
- type: precision_at_3
value: 24.064
- type: precision_at_5
value: 17.269000000000002
- type: recall_at_1
value: 29.037000000000003
- type: recall_at_10
value: 59.245000000000005
- type: recall_at_100
value: 87.355
- type: recall_at_1000
value: 98.74000000000001
- type: recall_at_3
value: 42.99
- type: recall_at_5
value: 49.681999999999995
- task:
type: PairClassification
dataset:
name: MTEB Cmnli
type: C-MTEB/CMNLI
config: default
split: validation
revision: 41bc36f332156f7adc9e38f53777c959b2ae9766
metrics:
- type: cos_sim_accuracy
value: 82.68190018039687
- type: cos_sim_ap
value: 90.18017125327886
- type: cos_sim_f1
value: 83.64080906868193
- type: cos_sim_precision
value: 79.7076890489303
- type: cos_sim_recall
value: 87.98223053542202
- type: dot_accuracy
value: 82.68190018039687
- type: dot_ap
value: 90.18782350103646
- type: dot_f1
value: 83.64242087729039
- type: dot_precision
value: 79.65313028764805
- type: dot_recall
value: 88.05237315875614
- type: euclidean_accuracy
value: 82.68190018039687
- type: euclidean_ap
value: 90.1801957900632
- type: euclidean_f1
value: 83.63636363636364
- type: euclidean_precision
value: 79.52772506852203
- type: euclidean_recall
value: 88.19265840542437
- type: manhattan_accuracy
value: 82.14070956103427
- type: manhattan_ap
value: 89.96178420101427
- type: manhattan_f1
value: 83.21087838578791
- type: manhattan_precision
value: 78.35605121850475
- type: manhattan_recall
value: 88.70703764320785
- type: max_accuracy
value: 82.68190018039687
- type: max_ap
value: 90.18782350103646
- type: max_f1
value: 83.64242087729039
- task:
type: Retrieval
dataset:
name: MTEB CovidRetrieval
type: C-MTEB/CovidRetrieval
config: default
split: dev
revision: 1271c7809071a13532e05f25fb53511ffce77117
metrics:
- type: map_at_1
value: 72.234
- type: map_at_10
value: 80.10000000000001
- type: map_at_100
value: 80.36
- type: map_at_1000
value: 80.363
- type: map_at_3
value: 78.315
- type: map_at_5
value: 79.607
- type: mrr_at_1
value: 72.392
- type: mrr_at_10
value: 80.117
- type: mrr_at_100
value: 80.36999999999999
- type: mrr_at_1000
value: 80.373
- type: mrr_at_3
value: 78.469
- type: mrr_at_5
value: 79.633
- type: ndcg_at_1
value: 72.392
- type: ndcg_at_10
value: 83.651
- type: ndcg_at_100
value: 84.749
- type: ndcg_at_1000
value: 84.83000000000001
- type: ndcg_at_3
value: 80.253
- type: ndcg_at_5
value: 82.485
- type: precision_at_1
value: 72.392
- type: precision_at_10
value: 9.557
- type: precision_at_100
value: 1.004
- type: precision_at_1000
value: 0.101
- type: precision_at_3
value: 28.732000000000003
- type: precision_at_5
value: 18.377
- type: recall_at_1
value: 72.234
- type: recall_at_10
value: 94.573
- type: recall_at_100
value: 99.368
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 85.669
- type: recall_at_5
value: 91.01700000000001
- task:
type: Retrieval
dataset:
name: MTEB DuRetrieval
type: C-MTEB/DuRetrieval
config: default
split: dev
revision: a1a333e290fe30b10f3f56498e3a0d911a693ced
metrics:
- type: map_at_1
value: 26.173999999999996
- type: map_at_10
value: 80.04
- type: map_at_100
value: 82.94500000000001
- type: map_at_1000
value: 82.98100000000001
- type: map_at_3
value: 55.562999999999995
- type: map_at_5
value: 69.89800000000001
- type: mrr_at_1
value: 89.5
- type: mrr_at_10
value: 92.996
- type: mrr_at_100
value: 93.06400000000001
- type: mrr_at_1000
value: 93.065
- type: mrr_at_3
value: 92.658
- type: mrr_at_5
value: 92.84599999999999
- type: ndcg_at_1
value: 89.5
- type: ndcg_at_10
value: 87.443
- type: ndcg_at_100
value: 90.253
- type: ndcg_at_1000
value: 90.549
- type: ndcg_at_3
value: 85.874
- type: ndcg_at_5
value: 84.842
- type: precision_at_1
value: 89.5
- type: precision_at_10
value: 41.805
- type: precision_at_100
value: 4.827
- type: precision_at_1000
value: 0.49
- type: precision_at_3
value: 76.85
- type: precision_at_5
value: 64.8
- type: recall_at_1
value: 26.173999999999996
- type: recall_at_10
value: 89.101
- type: recall_at_100
value: 98.08099999999999
- type: recall_at_1000
value: 99.529
- type: recall_at_3
value: 57.902
- type: recall_at_5
value: 74.602
- task:
type: Retrieval
dataset:
name: MTEB EcomRetrieval
type: C-MTEB/EcomRetrieval
config: default
split: dev
revision: 687de13dc7294d6fd9be10c6945f9e8fec8166b9
metrics:
- type: map_at_1
value: 56.10000000000001
- type: map_at_10
value: 66.15299999999999
- type: map_at_100
value: 66.625
- type: map_at_1000
value: 66.636
- type: map_at_3
value: 63.632999999999996
- type: map_at_5
value: 65.293
- type: mrr_at_1
value: 56.10000000000001
- type: mrr_at_10
value: 66.15299999999999
- type: mrr_at_100
value: 66.625
- type: mrr_at_1000
value: 66.636
- type: mrr_at_3
value: 63.632999999999996
- type: mrr_at_5
value: 65.293
- type: ndcg_at_1
value: 56.10000000000001
- type: ndcg_at_10
value: 71.146
- type: ndcg_at_100
value: 73.27799999999999
- type: ndcg_at_1000
value: 73.529
- type: ndcg_at_3
value: 66.09
- type: ndcg_at_5
value: 69.08999999999999
- type: precision_at_1
value: 56.10000000000001
- type: precision_at_10
value: 8.68
- type: precision_at_100
value: 0.964
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 24.4
- type: precision_at_5
value: 16.1
- type: recall_at_1
value: 56.10000000000001
- type: recall_at_10
value: 86.8
- type: recall_at_100
value: 96.39999999999999
- type: recall_at_1000
value: 98.3
- type: recall_at_3
value: 73.2
- type: recall_at_5
value: 80.5
- task:
type: Classification
dataset:
name: MTEB IFlyTek
type: C-MTEB/IFlyTek-classification
config: default
split: validation
revision: 421605374b29664c5fc098418fe20ada9bd55f8a
metrics:
- type: accuracy
value: 54.52096960369373
- type: f1
value: 40.930845295808695
- task:
type: Classification
dataset:
name: MTEB JDReview
type: C-MTEB/JDReview-classification
config: default
split: test
revision: b7c64bd89eb87f8ded463478346f76731f07bf8b
metrics:
- type: accuracy
value: 86.51031894934334
- type: ap
value: 55.9516014323483
- type: f1
value: 81.54813679326381
- task:
type: STS
dataset:
name: MTEB LCQMC
type: C-MTEB/LCQMC
config: default
split: test
revision: 17f9b096f80380fce5ed12a9be8be7784b337daf
metrics:
- type: cos_sim_pearson
value: 69.67437838574276
- type: cos_sim_spearman
value: 73.81314174653045
- type: euclidean_pearson
value: 72.63430276680275
- type: euclidean_spearman
value: 73.81358736777001
- type: manhattan_pearson
value: 72.58743833842829
- type: manhattan_spearman
value: 73.7590419009179
- task:
type: Reranking
dataset:
name: MTEB MMarcoReranking
type: C-MTEB/Mmarco-reranking
config: default
split: dev
revision: None
metrics:
- type: map
value: 31.648613483640254
- type: mrr
value: 30.37420634920635
- task:
type: Retrieval
dataset:
name: MTEB MMarcoRetrieval
type: C-MTEB/MMarcoRetrieval
config: default
split: dev
revision: 539bbde593d947e2a124ba72651aafc09eb33fc2
metrics:
- type: map_at_1
value: 73.28099999999999
- type: map_at_10
value: 81.977
- type: map_at_100
value: 82.222
- type: map_at_1000
value: 82.22699999999999
- type: map_at_3
value: 80.441
- type: map_at_5
value: 81.46600000000001
- type: mrr_at_1
value: 75.673
- type: mrr_at_10
value: 82.41000000000001
- type: mrr_at_100
value: 82.616
- type: mrr_at_1000
value: 82.621
- type: mrr_at_3
value: 81.094
- type: mrr_at_5
value: 81.962
- type: ndcg_at_1
value: 75.673
- type: ndcg_at_10
value: 85.15599999999999
- type: ndcg_at_100
value: 86.151
- type: ndcg_at_1000
value: 86.26899999999999
- type: ndcg_at_3
value: 82.304
- type: ndcg_at_5
value: 84.009
- type: precision_at_1
value: 75.673
- type: precision_at_10
value: 10.042
- type: precision_at_100
value: 1.052
- type: precision_at_1000
value: 0.106
- type: precision_at_3
value: 30.673000000000002
- type: precision_at_5
value: 19.326999999999998
- type: recall_at_1
value: 73.28099999999999
- type: recall_at_10
value: 94.446
- type: recall_at_100
value: 98.737
- type: recall_at_1000
value: 99.649
- type: recall_at_3
value: 86.984
- type: recall_at_5
value: 91.024
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (zh-CN)
type: mteb/amazon_massive_intent
config: zh-CN
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.08607935440484
- type: f1
value: 78.24879986066307
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (zh-CN)
type: mteb/amazon_massive_scenario
config: zh-CN
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.05917955615332
- type: f1
value: 85.05279279434997
- task:
type: Retrieval
dataset:
name: MTEB MedicalRetrieval
type: C-MTEB/MedicalRetrieval
config: default
split: dev
revision: 2039188fb5800a9803ba5048df7b76e6fb151fc6
metrics:
- type: map_at_1
value: 56.2
- type: map_at_10
value: 62.57899999999999
- type: map_at_100
value: 63.154999999999994
- type: map_at_1000
value: 63.193
- type: map_at_3
value: 61.217
- type: map_at_5
value: 62.012
- type: mrr_at_1
value: 56.3
- type: mrr_at_10
value: 62.629000000000005
- type: mrr_at_100
value: 63.205999999999996
- type: mrr_at_1000
value: 63.244
- type: mrr_at_3
value: 61.267
- type: mrr_at_5
value: 62.062
- type: ndcg_at_1
value: 56.2
- type: ndcg_at_10
value: 65.592
- type: ndcg_at_100
value: 68.657
- type: ndcg_at_1000
value: 69.671
- type: ndcg_at_3
value: 62.808
- type: ndcg_at_5
value: 64.24499999999999
- type: precision_at_1
value: 56.2
- type: precision_at_10
value: 7.5
- type: precision_at_100
value: 0.899
- type: precision_at_1000
value: 0.098
- type: precision_at_3
value: 22.467000000000002
- type: precision_at_5
value: 14.180000000000001
- type: recall_at_1
value: 56.2
- type: recall_at_10
value: 75.0
- type: recall_at_100
value: 89.9
- type: recall_at_1000
value: 97.89999999999999
- type: recall_at_3
value: 67.4
- type: recall_at_5
value: 70.89999999999999
- task:
type: Classification
dataset:
name: MTEB MultilingualSentiment
type: C-MTEB/MultilingualSentiment-classification
config: default
split: validation
revision: 46958b007a63fdbf239b7672c25d0bea67b5ea1a
metrics:
- type: accuracy
value: 76.87666666666667
- type: f1
value: 76.7317686219665
- task:
type: PairClassification
dataset:
name: MTEB Ocnli
type: C-MTEB/OCNLI
config: default
split: validation
revision: 66e76a618a34d6d565d5538088562851e6daa7ec
metrics:
- type: cos_sim_accuracy
value: 79.64266377910124
- type: cos_sim_ap
value: 84.78274442344829
- type: cos_sim_f1
value: 81.16947472745292
- type: cos_sim_precision
value: 76.47058823529412
- type: cos_sim_recall
value: 86.48363252375924
- type: dot_accuracy
value: 79.64266377910124
- type: dot_ap
value: 84.7851404063692
- type: dot_f1
value: 81.16947472745292
- type: dot_precision
value: 76.47058823529412
- type: dot_recall
value: 86.48363252375924
- type: euclidean_accuracy
value: 79.64266377910124
- type: euclidean_ap
value: 84.78068373762378
- type: euclidean_f1
value: 81.14794656110837
- type: euclidean_precision
value: 76.35009310986965
- type: euclidean_recall
value: 86.58922914466737
- type: manhattan_accuracy
value: 79.48023822414727
- type: manhattan_ap
value: 84.72928897427576
- type: manhattan_f1
value: 81.32084770823064
- type: manhattan_precision
value: 76.24768946395564
- type: manhattan_recall
value: 87.11721224920802
- type: max_accuracy
value: 79.64266377910124
- type: max_ap
value: 84.7851404063692
- type: max_f1
value: 81.32084770823064
- task:
type: Classification
dataset:
name: MTEB OnlineShopping
type: C-MTEB/OnlineShopping-classification
config: default
split: test
revision: e610f2ebd179a8fda30ae534c3878750a96db120
metrics:
- type: accuracy
value: 94.3
- type: ap
value: 92.8664032274438
- type: f1
value: 94.29311102997727
- task:
type: STS
dataset:
name: MTEB PAWSX
type: C-MTEB/PAWSX
config: default
split: test
revision: 9c6a90e430ac22b5779fb019a23e820b11a8b5e1
metrics:
- type: cos_sim_pearson
value: 48.51392279882909
- type: cos_sim_spearman
value: 54.06338895994974
- type: euclidean_pearson
value: 52.58480559573412
- type: euclidean_spearman
value: 54.06417276612201
- type: manhattan_pearson
value: 52.69525121721343
- type: manhattan_spearman
value: 54.048147455389675
- task:
type: STS
dataset:
name: MTEB QBQTC
type: C-MTEB/QBQTC
config: default
split: test
revision: 790b0510dc52b1553e8c49f3d2afb48c0e5c48b7
metrics:
- type: cos_sim_pearson
value: 29.728387290757325
- type: cos_sim_spearman
value: 31.366121633635284
- type: euclidean_pearson
value: 29.14588368552961
- type: euclidean_spearman
value: 31.36764411112844
- type: manhattan_pearson
value: 29.63517350523121
- type: manhattan_spearman
value: 31.94157020583762
- task:
type: STS
dataset:
name: MTEB STS22 (zh)
type: mteb/sts22-crosslingual-sts
config: zh
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 63.64868296271406
- type: cos_sim_spearman
value: 66.12800618164744
- type: euclidean_pearson
value: 63.21405767340238
- type: euclidean_spearman
value: 66.12786567790748
- type: manhattan_pearson
value: 64.04300276525848
- type: manhattan_spearman
value: 66.5066857145652
- task:
type: STS
dataset:
name: MTEB STSB
type: C-MTEB/STSB
config: default
split: test
revision: 0cde68302b3541bb8b3c340dc0644b0b745b3dc0
metrics:
- type: cos_sim_pearson
value: 81.2302623912794
- type: cos_sim_spearman
value: 81.16833673266562
- type: euclidean_pearson
value: 79.47647843876024
- type: euclidean_spearman
value: 81.16944349524972
- type: manhattan_pearson
value: 79.84947238492208
- type: manhattan_spearman
value: 81.64626599410026
- task:
type: Reranking
dataset:
name: MTEB T2Reranking
type: C-MTEB/T2Reranking
config: default
split: dev
revision: 76631901a18387f85eaa53e5450019b87ad58ef9
metrics:
- type: map
value: 67.80129586475687
- type: mrr
value: 77.77402311635554
- task:
type: Retrieval
dataset:
name: MTEB T2Retrieval
type: C-MTEB/T2Retrieval
config: default
split: dev
revision: 8731a845f1bf500a4f111cf1070785c793d10e64
metrics:
- type: map_at_1
value: 28.666999999999998
- type: map_at_10
value: 81.063
- type: map_at_100
value: 84.504
- type: map_at_1000
value: 84.552
- type: map_at_3
value: 56.897
- type: map_at_5
value: 70.073
- type: mrr_at_1
value: 92.087
- type: mrr_at_10
value: 94.132
- type: mrr_at_100
value: 94.19800000000001
- type: mrr_at_1000
value: 94.19999999999999
- type: mrr_at_3
value: 93.78999999999999
- type: mrr_at_5
value: 94.002
- type: ndcg_at_1
value: 92.087
- type: ndcg_at_10
value: 87.734
- type: ndcg_at_100
value: 90.736
- type: ndcg_at_1000
value: 91.184
- type: ndcg_at_3
value: 88.78
- type: ndcg_at_5
value: 87.676
- type: precision_at_1
value: 92.087
- type: precision_at_10
value: 43.46
- type: precision_at_100
value: 5.07
- type: precision_at_1000
value: 0.518
- type: precision_at_3
value: 77.49000000000001
- type: precision_at_5
value: 65.194
- type: recall_at_1
value: 28.666999999999998
- type: recall_at_10
value: 86.632
- type: recall_at_100
value: 96.646
- type: recall_at_1000
value: 98.917
- type: recall_at_3
value: 58.333999999999996
- type: recall_at_5
value: 72.974
- task:
type: Classification
dataset:
name: MTEB TNews
type: C-MTEB/TNews-classification
config: default
split: validation
revision: 317f262bf1e6126357bbe89e875451e4b0938fe4
metrics:
- type: accuracy
value: 52.971999999999994
- type: f1
value: 50.2898280984929
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringP2P
type: C-MTEB/ThuNewsClusteringP2P
config: default
split: test
revision: 5798586b105c0434e4f0fe5e767abe619442cf93
metrics:
- type: v_measure
value: 86.0797948663824
- task:
type: Clustering
dataset:
name: MTEB ThuNewsClusteringS2S
type: C-MTEB/ThuNewsClusteringS2S
config: default
split: test
revision: 8a8b2caeda43f39e13c4bc5bea0f8a667896e10d
metrics:
- type: v_measure
value: 85.10759092255017
- task:
type: Retrieval
dataset:
name: MTEB VideoRetrieval
type: C-MTEB/VideoRetrieval
config: default
split: dev
revision: 58c2597a5943a2ba48f4668c3b90d796283c5639
metrics:
- type: map_at_1
value: 65.60000000000001
- type: map_at_10
value: 74.773
- type: map_at_100
value: 75.128
- type: map_at_1000
value: 75.136
- type: map_at_3
value: 73.05
- type: map_at_5
value: 74.13499999999999
- type: mrr_at_1
value: 65.60000000000001
- type: mrr_at_10
value: 74.773
- type: mrr_at_100
value: 75.128
- type: mrr_at_1000
value: 75.136
- type: mrr_at_3
value: 73.05
- type: mrr_at_5
value: 74.13499999999999
- type: ndcg_at_1
value: 65.60000000000001
- type: ndcg_at_10
value: 78.84299999999999
- type: ndcg_at_100
value: 80.40899999999999
- type: ndcg_at_1000
value: 80.57
- type: ndcg_at_3
value: 75.40599999999999
- type: ndcg_at_5
value: 77.351
- type: precision_at_1
value: 65.60000000000001
- type: precision_at_10
value: 9.139999999999999
- type: precision_at_100
value: 0.984
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 27.400000000000002
- type: precision_at_5
value: 17.380000000000003
- type: recall_at_1
value: 65.60000000000001
- type: recall_at_10
value: 91.4
- type: recall_at_100
value: 98.4
- type: recall_at_1000
value: 99.6
- type: recall_at_3
value: 82.19999999999999
- type: recall_at_5
value: 86.9
- task:
type: Classification
dataset:
name: MTEB Waimai
type: C-MTEB/waimai-classification
config: default
split: test
revision: 339287def212450dcaa9df8c22bf93e9980c7023
metrics:
- type: accuracy
value: 89.47
- type: ap
value: 75.59561751845389
- type: f1
value: 87.95207751382563
- task:
type: Clustering
dataset:
name: MTEB AlloProfClusteringP2P
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: v_measure
value: 76.05592323841036
- type: v_measure
value: 64.51718058866508
- task:
type: Reranking
dataset:
name: MTEB AlloprofReranking
type: lyon-nlp/mteb-fr-reranking-alloprof-s2p
config: default
split: test
revision: 666fdacebe0291776e86f29345663dfaf80a0db9
metrics:
- type: map
value: 73.08278490943373
- type: mrr
value: 74.66561454570449
- task:
type: Retrieval
dataset:
name: MTEB AlloprofRetrieval
type: lyon-nlp/alloprof
config: default
split: test
revision: 392ba3f5bcc8c51f578786c1fc3dae648662cb9b
metrics:
- type: map_at_1
value: 38.912
- type: map_at_10
value: 52.437999999999995
- type: map_at_100
value: 53.38
- type: map_at_1000
value: 53.427
- type: map_at_3
value: 48.879
- type: map_at_5
value: 50.934000000000005
- type: mrr_at_1
value: 44.085
- type: mrr_at_10
value: 55.337
- type: mrr_at_100
value: 56.016999999999996
- type: mrr_at_1000
value: 56.043
- type: mrr_at_3
value: 52.55499999999999
- type: mrr_at_5
value: 54.20399999999999
- type: ndcg_at_1
value: 44.085
- type: ndcg_at_10
value: 58.876
- type: ndcg_at_100
value: 62.714000000000006
- type: ndcg_at_1000
value: 63.721000000000004
- type: ndcg_at_3
value: 52.444
- type: ndcg_at_5
value: 55.692
- type: precision_at_1
value: 44.085
- type: precision_at_10
value: 9.21
- type: precision_at_100
value: 1.164
- type: precision_at_1000
value: 0.128
- type: precision_at_3
value: 23.043
- type: precision_at_5
value: 15.898000000000001
- type: recall_at_1
value: 38.912
- type: recall_at_10
value: 75.577
- type: recall_at_100
value: 92.038
- type: recall_at_1000
value: 99.325
- type: recall_at_3
value: 58.592
- type: recall_at_5
value: 66.235
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (fr)
type: mteb/amazon_reviews_multi
config: fr
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 55.532000000000004
- type: f1
value: 52.5783943471605
- task:
type: Retrieval
dataset:
name: MTEB BSARDRetrieval
type: maastrichtlawtech/bsard
config: default
split: test
revision: 5effa1b9b5fa3b0f9e12523e6e43e5f86a6e6d59
metrics:
- type: map_at_1
value: 8.108
- type: map_at_10
value: 14.710999999999999
- type: map_at_100
value: 15.891
- type: map_at_1000
value: 15.983
- type: map_at_3
value: 12.237
- type: map_at_5
value: 13.679
- type: mrr_at_1
value: 8.108
- type: mrr_at_10
value: 14.710999999999999
- type: mrr_at_100
value: 15.891
- type: mrr_at_1000
value: 15.983
- type: mrr_at_3
value: 12.237
- type: mrr_at_5
value: 13.679
- type: ndcg_at_1
value: 8.108
- type: ndcg_at_10
value: 18.796
- type: ndcg_at_100
value: 25.098
- type: ndcg_at_1000
value: 27.951999999999998
- type: ndcg_at_3
value: 13.712
- type: ndcg_at_5
value: 16.309
- type: precision_at_1
value: 8.108
- type: precision_at_10
value: 3.198
- type: precision_at_100
value: 0.626
- type: precision_at_1000
value: 0.086
- type: precision_at_3
value: 6.006
- type: precision_at_5
value: 4.865
- type: recall_at_1
value: 8.108
- type: recall_at_10
value: 31.982
- type: recall_at_100
value: 62.613
- type: recall_at_1000
value: 86.036
- type: recall_at_3
value: 18.018
- type: recall_at_5
value: 24.324
- task:
type: Clustering
dataset:
name: MTEB HALClusteringS2S
type: lyon-nlp/clustering-hal-s2s
config: default
split: test
revision: e06ebbbb123f8144bef1a5d18796f3dec9ae2915
metrics:
- type: v_measure
value: 30.833269778867116
- task:
type: Clustering
dataset:
name: MTEB MLSUMClusteringP2P
type: mlsum
config: default
split: test
revision: b5d54f8f3b61ae17845046286940f03c6bc79bc7
metrics:
- type: v_measure
value: 50.0281928004713
- type: v_measure
value: 43.699961510636534
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (fr)
type: mteb/mtop_domain
config: fr
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 96.68963357344191
- type: f1
value: 96.45175170820961
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (fr)
type: mteb/mtop_intent
config: fr
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 87.46946445349202
- type: f1
value: 65.79860440988624
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 82.60663507109005
- type: f1
value: 77.20462646604777
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (fra)
type: masakhane/masakhanews
config: fra
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 60.19311264967803
- type: v_measure
value: 63.6235764409785
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (fr)
type: mteb/amazon_massive_intent
config: fr
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 81.65097511768661
- type: f1
value: 78.77796091490924
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (fr)
type: mteb/amazon_massive_scenario
config: fr
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 86.64425016812373
- type: f1
value: 85.4912728670017
- task:
type: Retrieval
dataset:
name: MTEB MintakaRetrieval (fr)
type: jinaai/mintakaqa
config: fr
split: test
revision: efa78cc2f74bbcd21eff2261f9e13aebe40b814e
metrics:
- type: map_at_1
value: 35.913000000000004
- type: map_at_10
value: 48.147
- type: map_at_100
value: 48.91
- type: map_at_1000
value: 48.949
- type: map_at_3
value: 45.269999999999996
- type: map_at_5
value: 47.115
- type: mrr_at_1
value: 35.913000000000004
- type: mrr_at_10
value: 48.147
- type: mrr_at_100
value: 48.91
- type: mrr_at_1000
value: 48.949
- type: mrr_at_3
value: 45.269999999999996
- type: mrr_at_5
value: 47.115
- type: ndcg_at_1
value: 35.913000000000004
- type: ndcg_at_10
value: 54.03
- type: ndcg_at_100
value: 57.839
- type: ndcg_at_1000
value: 58.925000000000004
- type: ndcg_at_3
value: 48.217999999999996
- type: ndcg_at_5
value: 51.56699999999999
- type: precision_at_1
value: 35.913000000000004
- type: precision_at_10
value: 7.244000000000001
- type: precision_at_100
value: 0.9039999999999999
- type: precision_at_1000
value: 0.099
- type: precision_at_3
value: 18.905
- type: precision_at_5
value: 12.981000000000002
- type: recall_at_1
value: 35.913000000000004
- type: recall_at_10
value: 72.441
- type: recall_at_100
value: 90.41799999999999
- type: recall_at_1000
value: 99.099
- type: recall_at_3
value: 56.716
- type: recall_at_5
value: 64.90599999999999
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (fr)
type: GEM/opusparcus
config: fr
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.90069513406156
- type: cos_sim_ap
value: 100.0
- type: cos_sim_f1
value: 99.95032290114257
- type: cos_sim_precision
value: 100.0
- type: cos_sim_recall
value: 99.90069513406156
- type: dot_accuracy
value: 99.90069513406156
- type: dot_ap
value: 100.0
- type: dot_f1
value: 99.95032290114257
- type: dot_precision
value: 100.0
- type: dot_recall
value: 99.90069513406156
- type: euclidean_accuracy
value: 99.90069513406156
- type: euclidean_ap
value: 100.0
- type: euclidean_f1
value: 99.95032290114257
- type: euclidean_precision
value: 100.0
- type: euclidean_recall
value: 99.90069513406156
- type: manhattan_accuracy
value: 99.90069513406156
- type: manhattan_ap
value: 100.0
- type: manhattan_f1
value: 99.95032290114257
- type: manhattan_precision
value: 100.0
- type: manhattan_recall
value: 99.90069513406156
- type: max_accuracy
value: 99.90069513406156
- type: max_ap
value: 100.0
- type: max_f1
value: 99.95032290114257
- task:
type: PairClassification
dataset:
name: MTEB PawsX (fr)
type: paws-x
config: fr
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 75.25
- type: cos_sim_ap
value: 80.86376001270014
- type: cos_sim_f1
value: 73.65945437441204
- type: cos_sim_precision
value: 64.02289452166802
- type: cos_sim_recall
value: 86.71096345514951
- type: dot_accuracy
value: 75.25
- type: dot_ap
value: 80.93686107633002
- type: dot_f1
value: 73.65945437441204
- type: dot_precision
value: 64.02289452166802
- type: dot_recall
value: 86.71096345514951
- type: euclidean_accuracy
value: 75.25
- type: euclidean_ap
value: 80.86379136218862
- type: euclidean_f1
value: 73.65945437441204
- type: euclidean_precision
value: 64.02289452166802
- type: euclidean_recall
value: 86.71096345514951
- type: manhattan_accuracy
value: 75.3
- type: manhattan_ap
value: 80.87826606097734
- type: manhattan_f1
value: 73.68421052631581
- type: manhattan_precision
value: 64.0
- type: manhattan_recall
value: 86.82170542635659
- type: max_accuracy
value: 75.3
- type: max_ap
value: 80.93686107633002
- type: max_f1
value: 73.68421052631581
- task:
type: STS
dataset:
name: MTEB SICKFr
type: Lajavaness/SICK-fr
config: default
split: test
revision: e077ab4cf4774a1e36d86d593b150422fafd8e8a
metrics:
- type: cos_sim_pearson
value: 81.42349425981143
- type: cos_sim_spearman
value: 78.90454327031226
- type: euclidean_pearson
value: 78.39086497435166
- type: euclidean_spearman
value: 78.9046133980509
- type: manhattan_pearson
value: 78.63743094286502
- type: manhattan_spearman
value: 79.12136348449269
- task:
type: STS
dataset:
name: MTEB STS22 (fr)
type: mteb/sts22-crosslingual-sts
config: fr
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 81.452697919749
- type: cos_sim_spearman
value: 82.58116836039301
- type: euclidean_pearson
value: 81.04038478932786
- type: euclidean_spearman
value: 82.58116836039301
- type: manhattan_pearson
value: 81.37075396187771
- type: manhattan_spearman
value: 82.73678231355368
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (fr)
type: stsb_multi_mt
config: fr
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 85.7419764013806
- type: cos_sim_spearman
value: 85.46085808849622
- type: euclidean_pearson
value: 83.70449639870063
- type: euclidean_spearman
value: 85.46159013076233
- type: manhattan_pearson
value: 83.95259510313929
- type: manhattan_spearman
value: 85.8029724659458
- task:
type: Summarization
dataset:
name: MTEB SummEvalFr
type: lyon-nlp/summarization-summeval-fr-p2p
config: default
split: test
revision: b385812de6a9577b6f4d0f88c6a6e35395a94054
metrics:
- type: cos_sim_pearson
value: 32.61063271753325
- type: cos_sim_spearman
value: 31.454589417353603
- type: dot_pearson
value: 32.6106288643431
- type: dot_spearman
value: 31.454589417353603
- task:
type: Reranking
dataset:
name: MTEB SyntecReranking
type: lyon-nlp/mteb-fr-reranking-syntec-s2p
config: default
split: test
revision: b205c5084a0934ce8af14338bf03feb19499c84d
metrics:
- type: map
value: 84.31666666666666
- type: mrr
value: 84.31666666666666
- task:
type: Retrieval
dataset:
name: MTEB SyntecRetrieval
type: lyon-nlp/mteb-fr-retrieval-syntec-s2p
config: default
split: test
revision: 77f7e271bf4a92b24fce5119f3486b583ca016ff
metrics:
- type: map_at_1
value: 63.0
- type: map_at_10
value: 73.471
- type: map_at_100
value: 73.87
- type: map_at_1000
value: 73.87
- type: map_at_3
value: 70.5
- type: map_at_5
value: 73.05
- type: mrr_at_1
value: 63.0
- type: mrr_at_10
value: 73.471
- type: mrr_at_100
value: 73.87
- type: mrr_at_1000
value: 73.87
- type: mrr_at_3
value: 70.5
- type: mrr_at_5
value: 73.05
- type: ndcg_at_1
value: 63.0
- type: ndcg_at_10
value: 78.255
- type: ndcg_at_100
value: 79.88
- type: ndcg_at_1000
value: 79.88
- type: ndcg_at_3
value: 72.702
- type: ndcg_at_5
value: 77.264
- type: precision_at_1
value: 63.0
- type: precision_at_10
value: 9.3
- type: precision_at_100
value: 1.0
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 26.333000000000002
- type: precision_at_5
value: 18.0
- type: recall_at_1
value: 63.0
- type: recall_at_10
value: 93.0
- type: recall_at_100
value: 100.0
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 79.0
- type: recall_at_5
value: 90.0
- task:
type: Retrieval
dataset:
name: MTEB XPQARetrieval (fr)
type: jinaai/xpqa
config: fr
split: test
revision: c99d599f0a6ab9b85b065da6f9d94f9cf731679f
metrics:
- type: map_at_1
value: 40.338
- type: map_at_10
value: 61.927
- type: map_at_100
value: 63.361999999999995
- type: map_at_1000
value: 63.405
- type: map_at_3
value: 55.479
- type: map_at_5
value: 59.732
- type: mrr_at_1
value: 63.551
- type: mrr_at_10
value: 71.006
- type: mrr_at_100
value: 71.501
- type: mrr_at_1000
value: 71.509
- type: mrr_at_3
value: 69.07
- type: mrr_at_5
value: 70.165
- type: ndcg_at_1
value: 63.551
- type: ndcg_at_10
value: 68.297
- type: ndcg_at_100
value: 73.13199999999999
- type: ndcg_at_1000
value: 73.751
- type: ndcg_at_3
value: 62.999
- type: ndcg_at_5
value: 64.89
- type: precision_at_1
value: 63.551
- type: precision_at_10
value: 15.661
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.207
- type: precision_at_3
value: 38.273
- type: precision_at_5
value: 27.61
- type: recall_at_1
value: 40.338
- type: recall_at_10
value: 77.267
- type: recall_at_100
value: 95.892
- type: recall_at_1000
value: 99.75500000000001
- type: recall_at_3
value: 60.36
- type: recall_at_5
value: 68.825
- task:
type: Clustering
dataset:
name: MTEB 8TagsClustering
type: PL-MTEB/8tags-clustering
config: default
split: test
revision: None
metrics:
- type: v_measure
value: 51.36126303874126
- task:
type: Classification
dataset:
name: MTEB AllegroReviews
type: PL-MTEB/allegro-reviews
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 67.13717693836979
- type: f1
value: 57.27609848003782
- task:
type: Retrieval
dataset:
name: MTEB ArguAna-PL
type: clarin-knext/arguana-pl
config: default
split: test
revision: 63fc86750af76253e8c760fc9e534bbf24d260a2
metrics:
- type: map_at_1
value: 35.276999999999994
- type: map_at_10
value: 51.086
- type: map_at_100
value: 51.788000000000004
- type: map_at_1000
value: 51.791
- type: map_at_3
value: 46.147
- type: map_at_5
value: 49.078
- type: mrr_at_1
value: 35.917
- type: mrr_at_10
value: 51.315999999999995
- type: mrr_at_100
value: 52.018
- type: mrr_at_1000
value: 52.022
- type: mrr_at_3
value: 46.349000000000004
- type: mrr_at_5
value: 49.297000000000004
- type: ndcg_at_1
value: 35.276999999999994
- type: ndcg_at_10
value: 59.870999999999995
- type: ndcg_at_100
value: 62.590999999999994
- type: ndcg_at_1000
value: 62.661
- type: ndcg_at_3
value: 49.745
- type: ndcg_at_5
value: 55.067
- type: precision_at_1
value: 35.276999999999994
- type: precision_at_10
value: 8.791
- type: precision_at_100
value: 0.991
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 20.057
- type: precision_at_5
value: 14.637
- type: recall_at_1
value: 35.276999999999994
- type: recall_at_10
value: 87.909
- type: recall_at_100
value: 99.14699999999999
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 60.171
- type: recall_at_5
value: 73.18599999999999
- task:
type: Classification
dataset:
name: MTEB CBD
type: PL-MTEB/cbd
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 78.03000000000002
- type: ap
value: 29.12548553897622
- type: f1
value: 66.54857118886073
- task:
type: PairClassification
dataset:
name: MTEB CDSC-E
type: PL-MTEB/cdsce-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 89.0
- type: cos_sim_ap
value: 76.75437826834582
- type: cos_sim_f1
value: 66.4850136239782
- type: cos_sim_precision
value: 68.92655367231639
- type: cos_sim_recall
value: 64.21052631578948
- type: dot_accuracy
value: 89.0
- type: dot_ap
value: 76.75437826834582
- type: dot_f1
value: 66.4850136239782
- type: dot_precision
value: 68.92655367231639
- type: dot_recall
value: 64.21052631578948
- type: euclidean_accuracy
value: 89.0
- type: euclidean_ap
value: 76.75437826834582
- type: euclidean_f1
value: 66.4850136239782
- type: euclidean_precision
value: 68.92655367231639
- type: euclidean_recall
value: 64.21052631578948
- type: manhattan_accuracy
value: 89.0
- type: manhattan_ap
value: 76.66074220647083
- type: manhattan_f1
value: 66.47058823529412
- type: manhattan_precision
value: 75.33333333333333
- type: manhattan_recall
value: 59.473684210526315
- type: max_accuracy
value: 89.0
- type: max_ap
value: 76.75437826834582
- type: max_f1
value: 66.4850136239782
- task:
type: STS
dataset:
name: MTEB CDSC-R
type: PL-MTEB/cdscr-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 93.12903172428328
- type: cos_sim_spearman
value: 92.66381487060741
- type: euclidean_pearson
value: 90.37278396708922
- type: euclidean_spearman
value: 92.66381487060741
- type: manhattan_pearson
value: 90.32503296540962
- type: manhattan_spearman
value: 92.6902938354313
- task:
type: Retrieval
dataset:
name: MTEB DBPedia-PL
type: clarin-knext/dbpedia-pl
config: default
split: test
revision: 76afe41d9af165cc40999fcaa92312b8b012064a
metrics:
- type: map_at_1
value: 8.83
- type: map_at_10
value: 18.326
- type: map_at_100
value: 26.496
- type: map_at_1000
value: 28.455000000000002
- type: map_at_3
value: 12.933
- type: map_at_5
value: 15.168000000000001
- type: mrr_at_1
value: 66.0
- type: mrr_at_10
value: 72.76700000000001
- type: mrr_at_100
value: 73.203
- type: mrr_at_1000
value: 73.219
- type: mrr_at_3
value: 71.458
- type: mrr_at_5
value: 72.246
- type: ndcg_at_1
value: 55.375
- type: ndcg_at_10
value: 41.3
- type: ndcg_at_100
value: 45.891
- type: ndcg_at_1000
value: 52.905
- type: ndcg_at_3
value: 46.472
- type: ndcg_at_5
value: 43.734
- type: precision_at_1
value: 66.0
- type: precision_at_10
value: 33.074999999999996
- type: precision_at_100
value: 11.094999999999999
- type: precision_at_1000
value: 2.374
- type: precision_at_3
value: 48.583
- type: precision_at_5
value: 42.0
- type: recall_at_1
value: 8.83
- type: recall_at_10
value: 22.587
- type: recall_at_100
value: 50.61600000000001
- type: recall_at_1000
value: 73.559
- type: recall_at_3
value: 13.688
- type: recall_at_5
value: 16.855
- task:
type: Retrieval
dataset:
name: MTEB FiQA-PL
type: clarin-knext/fiqa-pl
config: default
split: test
revision: 2e535829717f8bf9dc829b7f911cc5bbd4e6608e
metrics:
- type: map_at_1
value: 20.587
- type: map_at_10
value: 33.095
- type: map_at_100
value: 35.24
- type: map_at_1000
value: 35.429
- type: map_at_3
value: 28.626
- type: map_at_5
value: 31.136999999999997
- type: mrr_at_1
value: 40.586
- type: mrr_at_10
value: 49.033
- type: mrr_at_100
value: 49.952999999999996
- type: mrr_at_1000
value: 49.992
- type: mrr_at_3
value: 46.553
- type: mrr_at_5
value: 48.035
- type: ndcg_at_1
value: 40.586
- type: ndcg_at_10
value: 41.046
- type: ndcg_at_100
value: 48.586
- type: ndcg_at_1000
value: 51.634
- type: ndcg_at_3
value: 36.773
- type: ndcg_at_5
value: 38.389
- type: precision_at_1
value: 40.586
- type: precision_at_10
value: 11.466
- type: precision_at_100
value: 1.909
- type: precision_at_1000
value: 0.245
- type: precision_at_3
value: 24.434
- type: precision_at_5
value: 18.426000000000002
- type: recall_at_1
value: 20.587
- type: recall_at_10
value: 47.986000000000004
- type: recall_at_100
value: 75.761
- type: recall_at_1000
value: 94.065
- type: recall_at_3
value: 33.339
- type: recall_at_5
value: 39.765
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA-PL
type: clarin-knext/hotpotqa-pl
config: default
split: test
revision: a0bd479ac97b4ccb5bd6ce320c415d0bb4beb907
metrics:
- type: map_at_1
value: 40.878
- type: map_at_10
value: 58.775999999999996
- type: map_at_100
value: 59.632
- type: map_at_1000
value: 59.707
- type: map_at_3
value: 56.074
- type: map_at_5
value: 57.629
- type: mrr_at_1
value: 81.756
- type: mrr_at_10
value: 86.117
- type: mrr_at_100
value: 86.299
- type: mrr_at_1000
value: 86.30600000000001
- type: mrr_at_3
value: 85.345
- type: mrr_at_5
value: 85.832
- type: ndcg_at_1
value: 81.756
- type: ndcg_at_10
value: 67.608
- type: ndcg_at_100
value: 70.575
- type: ndcg_at_1000
value: 71.99600000000001
- type: ndcg_at_3
value: 63.723
- type: ndcg_at_5
value: 65.70700000000001
- type: precision_at_1
value: 81.756
- type: precision_at_10
value: 13.619
- type: precision_at_100
value: 1.5939999999999999
- type: precision_at_1000
value: 0.178
- type: precision_at_3
value: 39.604
- type: precision_at_5
value: 25.332
- type: recall_at_1
value: 40.878
- type: recall_at_10
value: 68.096
- type: recall_at_100
value: 79.696
- type: recall_at_1000
value: 89.082
- type: recall_at_3
value: 59.406000000000006
- type: recall_at_5
value: 63.329
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO-PL
type: clarin-knext/msmarco-pl
config: default
split: test
revision: 8634c07806d5cce3a6138e260e59b81760a0a640
metrics:
- type: map_at_1
value: 2.1839999999999997
- type: map_at_10
value: 11.346
- type: map_at_100
value: 30.325000000000003
- type: map_at_1000
value: 37.806
- type: map_at_3
value: 4.842
- type: map_at_5
value: 6.891
- type: mrr_at_1
value: 86.047
- type: mrr_at_10
value: 89.14699999999999
- type: mrr_at_100
value: 89.46600000000001
- type: mrr_at_1000
value: 89.46600000000001
- type: mrr_at_3
value: 89.14699999999999
- type: mrr_at_5
value: 89.14699999999999
- type: ndcg_at_1
value: 67.829
- type: ndcg_at_10
value: 62.222
- type: ndcg_at_100
value: 55.337
- type: ndcg_at_1000
value: 64.076
- type: ndcg_at_3
value: 68.12700000000001
- type: ndcg_at_5
value: 64.987
- type: precision_at_1
value: 86.047
- type: precision_at_10
value: 69.535
- type: precision_at_100
value: 32.93
- type: precision_at_1000
value: 6.6049999999999995
- type: precision_at_3
value: 79.845
- type: precision_at_5
value: 75.349
- type: recall_at_1
value: 2.1839999999999997
- type: recall_at_10
value: 12.866
- type: recall_at_100
value: 43.505
- type: recall_at_1000
value: 72.366
- type: recall_at_3
value: 4.947
- type: recall_at_5
value: 7.192
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (pl)
type: mteb/amazon_massive_intent
config: pl
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 80.75319435104238
- type: f1
value: 77.58961444860606
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (pl)
type: mteb/amazon_massive_scenario
config: pl
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 85.54472091459313
- type: f1
value: 84.29498563572106
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus-PL
type: clarin-knext/nfcorpus-pl
config: default
split: test
revision: 9a6f9567fda928260afed2de480d79c98bf0bec0
metrics:
- type: map_at_1
value: 4.367
- type: map_at_10
value: 10.38
- type: map_at_100
value: 13.516
- type: map_at_1000
value: 14.982000000000001
- type: map_at_3
value: 7.367
- type: map_at_5
value: 8.59
- type: mrr_at_1
value: 41.486000000000004
- type: mrr_at_10
value: 48.886
- type: mrr_at_100
value: 49.657000000000004
- type: mrr_at_1000
value: 49.713
- type: mrr_at_3
value: 46.904
- type: mrr_at_5
value: 48.065000000000005
- type: ndcg_at_1
value: 40.402
- type: ndcg_at_10
value: 30.885
- type: ndcg_at_100
value: 28.393
- type: ndcg_at_1000
value: 37.428
- type: ndcg_at_3
value: 35.394999999999996
- type: ndcg_at_5
value: 33.391999999999996
- type: precision_at_1
value: 41.486000000000004
- type: precision_at_10
value: 23.437
- type: precision_at_100
value: 7.638
- type: precision_at_1000
value: 2.0389999999999997
- type: precision_at_3
value: 32.817
- type: precision_at_5
value: 28.915999999999997
- type: recall_at_1
value: 4.367
- type: recall_at_10
value: 14.655000000000001
- type: recall_at_100
value: 29.665999999999997
- type: recall_at_1000
value: 62.073
- type: recall_at_3
value: 8.51
- type: recall_at_5
value: 10.689
- task:
type: Retrieval
dataset:
name: MTEB NQ-PL
type: clarin-knext/nq-pl
config: default
split: test
revision: f171245712cf85dd4700b06bef18001578d0ca8d
metrics:
- type: map_at_1
value: 28.616000000000003
- type: map_at_10
value: 41.626000000000005
- type: map_at_100
value: 42.689
- type: map_at_1000
value: 42.733
- type: map_at_3
value: 37.729
- type: map_at_5
value: 39.879999999999995
- type: mrr_at_1
value: 32.068000000000005
- type: mrr_at_10
value: 44.029
- type: mrr_at_100
value: 44.87
- type: mrr_at_1000
value: 44.901
- type: mrr_at_3
value: 40.687
- type: mrr_at_5
value: 42.625
- type: ndcg_at_1
value: 32.068000000000005
- type: ndcg_at_10
value: 48.449999999999996
- type: ndcg_at_100
value: 53.13
- type: ndcg_at_1000
value: 54.186
- type: ndcg_at_3
value: 40.983999999999995
- type: ndcg_at_5
value: 44.628
- type: precision_at_1
value: 32.068000000000005
- type: precision_at_10
value: 7.9750000000000005
- type: precision_at_100
value: 1.061
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 18.404999999999998
- type: precision_at_5
value: 13.111
- type: recall_at_1
value: 28.616000000000003
- type: recall_at_10
value: 66.956
- type: recall_at_100
value: 87.657
- type: recall_at_1000
value: 95.548
- type: recall_at_3
value: 47.453
- type: recall_at_5
value: 55.87800000000001
- task:
type: Classification
dataset:
name: MTEB PAC
type: laugustyniak/abusive-clauses-pl
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 69.04141326382856
- type: ap
value: 77.47589122111044
- type: f1
value: 66.6332277374775
- task:
type: PairClassification
dataset:
name: MTEB PPC
type: PL-MTEB/ppc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.4
- type: cos_sim_ap
value: 94.1044939667201
- type: cos_sim_f1
value: 88.78048780487805
- type: cos_sim_precision
value: 87.22044728434504
- type: cos_sim_recall
value: 90.39735099337747
- type: dot_accuracy
value: 86.4
- type: dot_ap
value: 94.1044939667201
- type: dot_f1
value: 88.78048780487805
- type: dot_precision
value: 87.22044728434504
- type: dot_recall
value: 90.39735099337747
- type: euclidean_accuracy
value: 86.4
- type: euclidean_ap
value: 94.1044939667201
- type: euclidean_f1
value: 88.78048780487805
- type: euclidean_precision
value: 87.22044728434504
- type: euclidean_recall
value: 90.39735099337747
- type: manhattan_accuracy
value: 86.4
- type: manhattan_ap
value: 94.11438365697387
- type: manhattan_f1
value: 88.77968877968877
- type: manhattan_precision
value: 87.84440842787681
- type: manhattan_recall
value: 89.73509933774835
- type: max_accuracy
value: 86.4
- type: max_ap
value: 94.11438365697387
- type: max_f1
value: 88.78048780487805
- task:
type: PairClassification
dataset:
name: MTEB PSC
type: PL-MTEB/psc-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 97.86641929499072
- type: cos_sim_ap
value: 99.36904211868182
- type: cos_sim_f1
value: 96.56203288490283
- type: cos_sim_precision
value: 94.72140762463343
- type: cos_sim_recall
value: 98.47560975609755
- type: dot_accuracy
value: 97.86641929499072
- type: dot_ap
value: 99.36904211868183
- type: dot_f1
value: 96.56203288490283
- type: dot_precision
value: 94.72140762463343
- type: dot_recall
value: 98.47560975609755
- type: euclidean_accuracy
value: 97.86641929499072
- type: euclidean_ap
value: 99.36904211868183
- type: euclidean_f1
value: 96.56203288490283
- type: euclidean_precision
value: 94.72140762463343
- type: euclidean_recall
value: 98.47560975609755
- type: manhattan_accuracy
value: 98.14471243042672
- type: manhattan_ap
value: 99.43359540492416
- type: manhattan_f1
value: 96.98795180722892
- type: manhattan_precision
value: 95.83333333333334
- type: manhattan_recall
value: 98.17073170731707
- type: max_accuracy
value: 98.14471243042672
- type: max_ap
value: 99.43359540492416
- type: max_f1
value: 96.98795180722892
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-IN
type: PL-MTEB/polemo2_in
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 89.39058171745152
- type: f1
value: 86.8552093529568
- task:
type: Classification
dataset:
name: MTEB PolEmo2.0-OUT
type: PL-MTEB/polemo2_out
config: default
split: test
revision: None
metrics:
- type: accuracy
value: 74.97975708502024
- type: f1
value: 58.73081628832407
- task:
type: Retrieval
dataset:
name: MTEB Quora-PL
type: clarin-knext/quora-pl
config: default
split: test
revision: 0be27e93455051e531182b85e85e425aba12e9d4
metrics:
- type: map_at_1
value: 64.917
- type: map_at_10
value: 78.74600000000001
- type: map_at_100
value: 79.501
- type: map_at_1000
value: 79.524
- type: map_at_3
value: 75.549
- type: map_at_5
value: 77.495
- type: mrr_at_1
value: 74.9
- type: mrr_at_10
value: 82.112
- type: mrr_at_100
value: 82.314
- type: mrr_at_1000
value: 82.317
- type: mrr_at_3
value: 80.745
- type: mrr_at_5
value: 81.607
- type: ndcg_at_1
value: 74.83999999999999
- type: ndcg_at_10
value: 83.214
- type: ndcg_at_100
value: 84.997
- type: ndcg_at_1000
value: 85.207
- type: ndcg_at_3
value: 79.547
- type: ndcg_at_5
value: 81.46600000000001
- type: precision_at_1
value: 74.83999999999999
- type: precision_at_10
value: 12.822
- type: precision_at_100
value: 1.506
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 34.903
- type: precision_at_5
value: 23.16
- type: recall_at_1
value: 64.917
- type: recall_at_10
value: 92.27199999999999
- type: recall_at_100
value: 98.715
- type: recall_at_1000
value: 99.854
- type: recall_at_3
value: 82.04599999999999
- type: recall_at_5
value: 87.2
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS-PL
type: clarin-knext/scidocs-pl
config: default
split: test
revision: 45452b03f05560207ef19149545f168e596c9337
metrics:
- type: map_at_1
value: 3.51
- type: map_at_10
value: 9.046999999999999
- type: map_at_100
value: 10.823
- type: map_at_1000
value: 11.144
- type: map_at_3
value: 6.257
- type: map_at_5
value: 7.648000000000001
- type: mrr_at_1
value: 17.299999999999997
- type: mrr_at_10
value: 27.419
- type: mrr_at_100
value: 28.618
- type: mrr_at_1000
value: 28.685
- type: mrr_at_3
value: 23.817
- type: mrr_at_5
value: 25.927
- type: ndcg_at_1
value: 17.299999999999997
- type: ndcg_at_10
value: 16.084
- type: ndcg_at_100
value: 23.729
- type: ndcg_at_1000
value: 29.476999999999997
- type: ndcg_at_3
value: 14.327000000000002
- type: ndcg_at_5
value: 13.017999999999999
- type: precision_at_1
value: 17.299999999999997
- type: precision_at_10
value: 8.63
- type: precision_at_100
value: 1.981
- type: precision_at_1000
value: 0.336
- type: precision_at_3
value: 13.4
- type: precision_at_5
value: 11.700000000000001
- type: recall_at_1
value: 3.51
- type: recall_at_10
value: 17.518
- type: recall_at_100
value: 40.275
- type: recall_at_1000
value: 68.203
- type: recall_at_3
value: 8.155
- type: recall_at_5
value: 11.875
- task:
type: PairClassification
dataset:
name: MTEB SICK-E-PL
type: PL-MTEB/sicke-pl-pairclassification
config: default
split: test
revision: None
metrics:
- type: cos_sim_accuracy
value: 86.30248675091724
- type: cos_sim_ap
value: 83.6756734006714
- type: cos_sim_f1
value: 74.97367497367497
- type: cos_sim_precision
value: 73.91003460207612
- type: cos_sim_recall
value: 76.06837606837607
- type: dot_accuracy
value: 86.30248675091724
- type: dot_ap
value: 83.6756734006714
- type: dot_f1
value: 74.97367497367497
- type: dot_precision
value: 73.91003460207612
- type: dot_recall
value: 76.06837606837607
- type: euclidean_accuracy
value: 86.30248675091724
- type: euclidean_ap
value: 83.67566984333091
- type: euclidean_f1
value: 74.97367497367497
- type: euclidean_precision
value: 73.91003460207612
- type: euclidean_recall
value: 76.06837606837607
- type: manhattan_accuracy
value: 86.28210354667753
- type: manhattan_ap
value: 83.64216119130171
- type: manhattan_f1
value: 74.92152075340078
- type: manhattan_precision
value: 73.4107997265892
- type: manhattan_recall
value: 76.49572649572649
- type: max_accuracy
value: 86.30248675091724
- type: max_ap
value: 83.6756734006714
- type: max_f1
value: 74.97367497367497
- task:
type: STS
dataset:
name: MTEB SICK-R-PL
type: PL-MTEB/sickr-pl-sts
config: default
split: test
revision: None
metrics:
- type: cos_sim_pearson
value: 82.23295940859121
- type: cos_sim_spearman
value: 78.89329160768719
- type: euclidean_pearson
value: 79.56019107076818
- type: euclidean_spearman
value: 78.89330209904084
- type: manhattan_pearson
value: 79.76098513973719
- type: manhattan_spearman
value: 79.05490162570123
- task:
type: STS
dataset:
name: MTEB STS22 (pl)
type: mteb/sts22-crosslingual-sts
config: pl
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 37.732606308062486
- type: cos_sim_spearman
value: 41.01645667030284
- type: euclidean_pearson
value: 26.61722556367085
- type: euclidean_spearman
value: 41.01645667030284
- type: manhattan_pearson
value: 26.60917378970807
- type: manhattan_spearman
value: 41.51335727617614
- task:
type: Retrieval
dataset:
name: MTEB SciFact-PL
type: clarin-knext/scifact-pl
config: default
split: test
revision: 47932a35f045ef8ed01ba82bf9ff67f6e109207e
metrics:
- type: map_at_1
value: 54.31700000000001
- type: map_at_10
value: 65.564
- type: map_at_100
value: 66.062
- type: map_at_1000
value: 66.08699999999999
- type: map_at_3
value: 62.592999999999996
- type: map_at_5
value: 63.888
- type: mrr_at_1
value: 56.99999999999999
- type: mrr_at_10
value: 66.412
- type: mrr_at_100
value: 66.85900000000001
- type: mrr_at_1000
value: 66.88
- type: mrr_at_3
value: 64.22200000000001
- type: mrr_at_5
value: 65.206
- type: ndcg_at_1
value: 56.99999999999999
- type: ndcg_at_10
value: 70.577
- type: ndcg_at_100
value: 72.879
- type: ndcg_at_1000
value: 73.45
- type: ndcg_at_3
value: 65.5
- type: ndcg_at_5
value: 67.278
- type: precision_at_1
value: 56.99999999999999
- type: precision_at_10
value: 9.667
- type: precision_at_100
value: 1.083
- type: precision_at_1000
value: 0.11299999999999999
- type: precision_at_3
value: 26.0
- type: precision_at_5
value: 16.933
- type: recall_at_1
value: 54.31700000000001
- type: recall_at_10
value: 85.056
- type: recall_at_100
value: 95.667
- type: recall_at_1000
value: 100.0
- type: recall_at_3
value: 71.0
- type: recall_at_5
value: 75.672
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID-PL
type: clarin-knext/trec-covid-pl
config: default
split: test
revision: 81bcb408f33366c2a20ac54adafad1ae7e877fdd
metrics:
- type: map_at_1
value: 0.245
- type: map_at_10
value: 2.051
- type: map_at_100
value: 12.009
- type: map_at_1000
value: 27.448
- type: map_at_3
value: 0.721
- type: map_at_5
value: 1.13
- type: mrr_at_1
value: 88.0
- type: mrr_at_10
value: 93.0
- type: mrr_at_100
value: 93.0
- type: mrr_at_1000
value: 93.0
- type: mrr_at_3
value: 93.0
- type: mrr_at_5
value: 93.0
- type: ndcg_at_1
value: 85.0
- type: ndcg_at_10
value: 80.303
- type: ndcg_at_100
value: 61.23499999999999
- type: ndcg_at_1000
value: 52.978
- type: ndcg_at_3
value: 84.419
- type: ndcg_at_5
value: 82.976
- type: precision_at_1
value: 88.0
- type: precision_at_10
value: 83.39999999999999
- type: precision_at_100
value: 61.96
- type: precision_at_1000
value: 22.648
- type: precision_at_3
value: 89.333
- type: precision_at_5
value: 87.2
- type: recall_at_1
value: 0.245
- type: recall_at_10
value: 2.193
- type: recall_at_100
value: 14.938
- type: recall_at_1000
value: 48.563
- type: recall_at_3
value: 0.738
- type: recall_at_5
value: 1.173
---
# Hoshino-Yumetsuki/gte-Qwen2-7B-instruct-Q8_0-GGUF
This model was converted to GGUF format from [`Alibaba-NLP/gte-Qwen2-7B-instruct`](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Hoshino-Yumetsuki/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Hoshino-Yumetsuki/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Hoshino-Yumetsuki/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Hoshino-Yumetsuki/gte-Qwen2-7B-instruct-Q8_0-GGUF --hf-file gte-qwen2-7b-instruct-q8_0.gguf -c 2048
```
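Since the underlying checkpoint is an embedding model, you may want vectors rather than generated text. The sketch below uses the llama-cpp-python bindings; the package, the `embedding=True` flag, and the local file path are assumptions based on common llama.cpp usage, not something this repo documents, and the exact return shape can vary by library version.
```python
# Minimal embedding sketch using llama-cpp-python (`pip install llama-cpp-python`).
# The local .gguf path is hypothetical -- download the file from this repo first.
from llama_cpp import Llama

# embedding=True loads the model for embedding extraction instead of generation.
llm = Llama(model_path="gte-qwen2-7b-instruct-q8_0.gguf", embedding=True)

# embed() returns the embedding for the input text; whether it is one pooled
# vector or per-token vectors depends on the llama-cpp-python version.
vector = llm.embed("The meaning to life and the universe is")
print(len(vector))
```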
| ["SUMMARIZATION"] | ["BIOSSES", "SCIFACT"] |
model-attribution-challenge/bloom-2b5 | model-attribution-challenge | text-generation | ["transformers", "pytorch", "bloom", "feature-extraction", "text-generation", "ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh", "zhs", "zht", "zu", "arxiv:1909.08053", "arxiv:2110.02861", "arxiv:2108.12409", "license:bigscience-bloom-rail-1.0", "model-index", "text-generation-inference", "endpoints_compatible", "region:us"] | 2022-08-09T19:38:50 | 2022-09-27T15:58:41 | 31 | 0 |
---
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
license: bigscience-bloom-rail-1.0
pipeline_tag: text-generation
model-index:
- name: bloom
results:
- task:
type: text-generation
name: text generation
dataset:
name: arc_challenge
type: arc_challenge
metrics:
- type: acc
value: 0.27986348122866894
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: arc_easy
type: arc_easy
metrics:
- type: acc
value: 0.5946969696969697
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: axb
type: axb
metrics:
- type: acc
value: 0.4433876811594203
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: axg
type: axg
metrics:
- type: acc
value: 0.5
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: boolq
type: boolq
metrics:
- type: acc
value: 0.6165137614678899
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: cb
type: cb
metrics:
- type: acc
value: 0.30357142857142855
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: cola
type: cola
metrics:
- type: acc
value: 0.610738255033557
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: copa
type: copa
metrics:
- type: acc
value: 0.63
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: crows_pairs_english
type: crows_pairs_english
metrics:
- type: acc
value: 0.4973166368515206
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: crows_pairs_french
type: crows_pairs_french
metrics:
- type: acc
value: 0.5032796660703638
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: diabla
type: diabla
metrics:
- type: acc
value: 0.28888308977035493
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_afr
type: gsarti/flores_101_afr
metrics:
- type: byte_perplexity
value: 6.500798737976343
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_amh
type: gsarti/flores_101_amh
metrics:
- type: byte_perplexity
value: 3.9726863338897145
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ara
type: gsarti/flores_101_ara
metrics:
- type: byte_perplexity
value: 1.8083841089875814
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_asm
type: gsarti/flores_101_asm
metrics:
- type: byte_perplexity
value: 5.699102962086425
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ast
type: gsarti/flores_101_ast
metrics:
- type: byte_perplexity
value: 3.9252047073429384
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_azj
type: gsarti/flores_101_azj
metrics:
- type: byte_perplexity
value: 6.942805054270002
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_bel
type: gsarti/flores_101_bel
metrics:
- type: byte_perplexity
value: 3.614136245847082
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ben
type: gsarti/flores_101_ben
metrics:
- type: byte_perplexity
value: 5.121491534300969
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_bos
type: gsarti/flores_101_bos
metrics:
- type: byte_perplexity
value: 5.653353469118798
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_bul
type: gsarti/flores_101_bul
metrics:
- type: byte_perplexity
value: 2.7014693938055068
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_cat
type: gsarti/flores_101_cat
metrics:
- type: byte_perplexity
value: 2.305190041967345
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ceb
type: gsarti/flores_101_ceb
metrics:
- type: byte_perplexity
value: 6.291000321323428
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ces
type: gsarti/flores_101_ces
metrics:
- type: byte_perplexity
value: 5.447322753586386
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ckb
type: gsarti/flores_101_ckb
metrics:
- type: byte_perplexity
value: 3.7255124939234765
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_cym
type: gsarti/flores_101_cym
metrics:
- type: byte_perplexity
value: 12.539424151448149
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_dan
type: gsarti/flores_101_dan
metrics:
- type: byte_perplexity
value: 5.183309001005672
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_deu
type: gsarti/flores_101_deu
metrics:
- type: byte_perplexity
value: 3.1180422286591347
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ell
type: gsarti/flores_101_ell
metrics:
- type: byte_perplexity
value: 2.467943456164706
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_eng
type: gsarti/flores_101_eng
metrics:
- type: byte_perplexity
value: 2.018740628193298
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_est
type: gsarti/flores_101_est
metrics:
- type: byte_perplexity
value: 9.11654425176368
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_fas
type: gsarti/flores_101_fas
metrics:
- type: byte_perplexity
value: 3.058009097116482
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_fin
type: gsarti/flores_101_fin
metrics:
- type: byte_perplexity
value: 6.847047959628553
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_fra
type: gsarti/flores_101_fra
metrics:
- type: byte_perplexity
value: 1.9975177011840075
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ful
type: gsarti/flores_101_ful
metrics:
- type: byte_perplexity
value: 11.465912731488828
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_gle
type: gsarti/flores_101_gle
metrics:
- type: byte_perplexity
value: 8.681491663539422
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_glg
type: gsarti/flores_101_glg
metrics:
- type: byte_perplexity
value: 3.029991089015508
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_guj
type: gsarti/flores_101_guj
metrics:
- type: byte_perplexity
value: 4.955224230286231
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hau
type: gsarti/flores_101_hau
metrics:
- type: byte_perplexity
value: 10.758347356372159
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_heb
type: gsarti/flores_101_heb
metrics:
- type: byte_perplexity
value: 3.6004478129801667
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hin
type: gsarti/flores_101_hin
metrics:
- type: byte_perplexity
value: 4.712530650588064
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hrv
type: gsarti/flores_101_hrv
metrics:
- type: byte_perplexity
value: 5.822418943372185
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hun
type: gsarti/flores_101_hun
metrics:
- type: byte_perplexity
value: 6.440482646965992
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_hye
type: gsarti/flores_101_hye
metrics:
- type: byte_perplexity
value: 3.657718918347166
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ibo
type: gsarti/flores_101_ibo
metrics:
- type: byte_perplexity
value: 5.564814003872672
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ind
type: gsarti/flores_101_ind
metrics:
- type: byte_perplexity
value: 2.1597101468869373
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_isl
type: gsarti/flores_101_isl
metrics:
- type: byte_perplexity
value: 8.082349269518136
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ita
type: gsarti/flores_101_ita
metrics:
- type: byte_perplexity
value: 2.9687591414176207
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_jav
type: gsarti/flores_101_jav
metrics:
- type: byte_perplexity
value: 7.0573805415708994
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_jpn
type: gsarti/flores_101_jpn
metrics:
- type: byte_perplexity
value: 2.7758864197116933
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kam
type: gsarti/flores_101_kam
metrics:
- type: byte_perplexity
value: 11.072949642861332
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kan
type: gsarti/flores_101_kan
metrics:
- type: byte_perplexity
value: 5.551730651007082
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kat
type: gsarti/flores_101_kat
metrics:
- type: byte_perplexity
value: 2.522630524283745
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kaz
type: gsarti/flores_101_kaz
metrics:
- type: byte_perplexity
value: 3.3901748516975574
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kea
type: gsarti/flores_101_kea
metrics:
- type: byte_perplexity
value: 8.918534182590863
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kir
type: gsarti/flores_101_kir
metrics:
- type: byte_perplexity
value: 3.729278369847201
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_kor
type: gsarti/flores_101_kor
metrics:
- type: byte_perplexity
value: 3.932884847226212
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lao
type: gsarti/flores_101_lao
metrics:
- type: byte_perplexity
value: 2.9077314760849924
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lav
type: gsarti/flores_101_lav
metrics:
- type: byte_perplexity
value: 7.777221919194806
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lin
type: gsarti/flores_101_lin
metrics:
- type: byte_perplexity
value: 7.524842908050988
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lit
type: gsarti/flores_101_lit
metrics:
- type: byte_perplexity
value: 7.369179434621725
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ltz
type: gsarti/flores_101_ltz
metrics:
- type: byte_perplexity
value: 8.801059747949214
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_lug
type: gsarti/flores_101_lug
metrics:
- type: byte_perplexity
value: 8.483203026364786
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_luo
type: gsarti/flores_101_luo
metrics:
- type: byte_perplexity
value: 11.975963093623681
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mal
type: gsarti/flores_101_mal
metrics:
- type: byte_perplexity
value: 4.615948455160037
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mar
type: gsarti/flores_101_mar
metrics:
- type: byte_perplexity
value: 5.483253482821379
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mkd
type: gsarti/flores_101_mkd
metrics:
- type: byte_perplexity
value: 2.9656732291754087
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mlt
type: gsarti/flores_101_mlt
metrics:
- type: byte_perplexity
value: 15.004773437665275
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mon
type: gsarti/flores_101_mon
metrics:
- type: byte_perplexity
value: 3.410598542315402
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mri
type: gsarti/flores_101_mri
metrics:
- type: byte_perplexity
value: 7.474035895661322
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_msa
type: gsarti/flores_101_msa
metrics:
- type: byte_perplexity
value: 2.5710001772665634
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_mya
type: gsarti/flores_101_mya
metrics:
- type: byte_perplexity
value: 2.413577969878331
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_nld
type: gsarti/flores_101_nld
metrics:
- type: byte_perplexity
value: 4.127831721885065
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_nob
type: gsarti/flores_101_nob
metrics:
- type: byte_perplexity
value: 5.402763169129877
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_npi
type: gsarti/flores_101_npi
metrics:
- type: byte_perplexity
value: 5.199342701937889
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_nso
type: gsarti/flores_101_nso
metrics:
- type: byte_perplexity
value: 8.154626800955667
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_nya
type: gsarti/flores_101_nya
metrics:
- type: byte_perplexity
value: 8.179860208369393
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_oci
type: gsarti/flores_101_oci
metrics:
- type: byte_perplexity
value: 4.8617357393685845
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_orm
type: gsarti/flores_101_orm
metrics:
- type: byte_perplexity
value: 12.911595421079408
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ory
type: gsarti/flores_101_ory
metrics:
- type: byte_perplexity
value: 5.189421861225964
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_pan
type: gsarti/flores_101_pan
metrics:
- type: byte_perplexity
value: 4.698477289331806
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_pol
type: gsarti/flores_101_pol
metrics:
- type: byte_perplexity
value: 4.625550458479643
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_por
type: gsarti/flores_101_por
metrics:
- type: byte_perplexity
value: 1.9754515986213523
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_pus
type: gsarti/flores_101_pus
metrics:
- type: byte_perplexity
value: 4.4963371422771585
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ron
type: gsarti/flores_101_ron
metrics:
- type: byte_perplexity
value: 4.965456830031304
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_rus
type: gsarti/flores_101_rus
metrics:
- type: byte_perplexity
value: 2.0498020542445303
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_slk
type: gsarti/flores_101_slk
metrics:
- type: byte_perplexity
value: 6.450822127057479
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_slv
type: gsarti/flores_101_slv
metrics:
- type: byte_perplexity
value: 6.620252120186232
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_sna
type: gsarti/flores_101_sna
metrics:
- type: byte_perplexity
value: 8.462166771382726
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_snd
type: gsarti/flores_101_snd
metrics:
- type: byte_perplexity
value: 5.466066951221973
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_som
type: gsarti/flores_101_som
metrics:
- type: byte_perplexity
value: 11.95918054093392
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_spa
type: gsarti/flores_101_spa
metrics:
- type: byte_perplexity
value: 1.8965140104323535
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_srp
type: gsarti/flores_101_srp
metrics:
- type: byte_perplexity
value: 2.871214785885079
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_swe
type: gsarti/flores_101_swe
metrics:
- type: byte_perplexity
value: 5.054972008155866
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_swh
type: gsarti/flores_101_swh
metrics:
- type: byte_perplexity
value: 3.6973091886730676
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tam
type: gsarti/flores_101_tam
metrics:
- type: byte_perplexity
value: 4.539493400469833
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tel
type: gsarti/flores_101_tel
metrics:
- type: byte_perplexity
value: 5.807499987508966
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tgk
type: gsarti/flores_101_tgk
metrics:
- type: byte_perplexity
value: 3.5994818827380426
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tgl
type: gsarti/flores_101_tgl
metrics:
- type: byte_perplexity
value: 5.667053833119858
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tha
type: gsarti/flores_101_tha
metrics:
- type: byte_perplexity
value: 2.365940201944242
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_tur
type: gsarti/flores_101_tur
metrics:
- type: byte_perplexity
value: 4.885014749844601
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_ukr
type: gsarti/flores_101_ukr
metrics:
- type: byte_perplexity
value: 2.7240934990288483
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_umb
type: gsarti/flores_101_umb
metrics:
- type: byte_perplexity
value: 12.766915508610673
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_urd
type: gsarti/flores_101_urd
metrics:
- type: byte_perplexity
value: 1.9797467071381232
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_uzb
type: gsarti/flores_101_uzb
metrics:
- type: byte_perplexity
value: 12.002337637722146
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_vie
type: gsarti/flores_101_vie
metrics:
- type: byte_perplexity
value: 1.76578415476397
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_wol
type: gsarti/flores_101_wol
metrics:
- type: byte_perplexity
value: 9.144285650306488
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_xho
type: gsarti/flores_101_xho
metrics:
- type: byte_perplexity
value: 7.403240538286952
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_yor
type: gsarti/flores_101_yor
metrics:
- type: byte_perplexity
value: 5.91272037551173
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_zho_simpl
type: gsarti/flores_101_zho_simpl
metrics:
- type: byte_perplexity
value: 2.2769070822768533
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_zho_trad
type: gsarti/flores_101_zho_trad
metrics:
- type: byte_perplexity
value: 2.5180582198242383
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: gsarti/flores_101_zul
type: gsarti/flores_101_zul
metrics:
- type: byte_perplexity
value: 8.53353320693145
name: byte_perplexity
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: headqa
type: headqa
metrics:
- type: acc
value: 0.26440554339897887
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: hellaswag
type: hellaswag
metrics:
- type: acc
value: 0.41236805417247563
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: logiqa
type: logiqa
metrics:
- type: acc
value: 0.2073732718894009
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mathqa
type: mathqa
metrics:
- type: acc
value: 0.24958123953098826
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mc_taco
type: mc_taco
metrics:
- type: em
value: 0.11936936936936937
name: em
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mnli
type: mnli
metrics:
- type: acc
value: 0.35496688741721855
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mnli_mismatched
type: mnli_mismatched
metrics:
- type: acc
value: 0.35211554109031734
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: mrpc
type: mrpc
metrics:
- type: acc
value: 0.5857843137254902
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: multirc
type: multirc
metrics:
- type: acc
value: 0.5375412541254125
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: openbookqa
type: openbookqa
metrics:
- type: acc
value: 0.216
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: piqa
type: piqa
metrics:
- type: acc
value: 0.7078346028291621
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: prost
type: prost
metrics:
- type: acc
value: 0.22683603757472245
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: pubmedqa
type: pubmedqa
metrics:
- type: acc
value: 0.616
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: qnli
type: qnli
metrics:
- type: acc
value: 0.5072304594545122
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: qqp
type: qqp
metrics:
- type: acc
value: 0.3842443729903537
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: race
type: race
metrics:
- type: acc
value: 0.3521531100478469
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: rte
type: rte
metrics:
- type: acc
value: 0.47653429602888087
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: sciq
type: sciq
metrics:
- type: acc
value: 0.892
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: sst
type: sst
metrics:
- type: acc
value: 0.5177752293577982
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: triviaqa
type: triviaqa
metrics:
- type: acc
value: 0.041633518960487934
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: tydiqa_primary
type: tydiqa_primary
metrics:
- type: acc
value: 0.3011337608795236
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: webqs
type: webqs
metrics:
- type: acc
value: 0.01673228346456693
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: wic
type: wic
metrics:
- type: acc
value: 0.5015673981191222
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: winogrande
type: winogrande
metrics:
- type: acc
value: 0.5864246250986582
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: wnli
type: wnli
metrics:
- type: acc
value: 0.471830985915493
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: wsc
type: wsc
metrics:
- type: acc
value: 0.4423076923076923
name: acc
verified: false
- task:
type: text-generation
name: text generation
dataset:
name: humaneval
type: humaneval
metrics:
- type: pass@1
value: 0.15524390243902436
name: pass@1
verified: false
- type: pass@10
value: 0.3220367632383857
name: pass@10
verified: false
- type: pass@100
value: 0.5545431515723145
name: pass@100
verified: false
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBi positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf); a short slope sketch follows this list), with GeLU activation functions
* 3,002,557,440 parameters:
* 642,252,800 embedding parameters
* 30 layers, 32 attention heads
* Hidden layers are 2560-dimensional
* Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
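To make the ALiBi bullet above concrete, here is a minimal sketch of the head-specific slopes from the ALiBi paper for power-of-two head counts such as the 32 heads listed; this is illustrative, not the actual training code:
```python
def alibi_slopes(n_heads: int) -> list[float]:
    # Each attention head i adds a bias of slope_i * -(query/key distance);
    # for power-of-two head counts the slopes form the geometric sequence
    # 2^(-8/n), 2^(-16/n), ..., 2^(-8) (ALiBi paper, Sec. 3).
    return [2 ** (-8 * (i + 1) / n_heads) for i in range(n_heads)]

print(alibi_slopes(32)[:4])  # steepest slopes for the first few heads
```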
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
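As a toy illustration of that objective (shapes are arbitrary; the vocabulary size matches the tokenizer section below):
```python
import torch

loss_fn = torch.nn.CrossEntropyLoss(reduction="mean")  # mean-reduced cross entropy
logits = torch.randn(8, 250_680)                       # (tokens, vocabulary size)
targets = torch.randint(0, 250_680, (8,))              # gold next-token ids
print(loss_fn(logits, targets))                        # scalar loss in nats
```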
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node, using NVLink 4 inter-GPU connects and 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11c-2B5-logs)
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
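A minimal sketch of loading it with the `transformers` library, assuming the linked tokenizer repo loads via `AutoTokenizer`:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bigscience/tokenizer")
enc = tok("BigScience est un atelier de recherche collaboratif.")
print(enc["input_ids"])  # byte-level BPE token ids
print(tok.vocab_size)    # expected to be on the order of the 250,680 figure above
```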
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation (see the sketch after this list)
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
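A minimal text-generation sketch with the `transformers` pipeline API (assuming this repository hosts the full PyTorch weights, as its tags suggest):
```python
from transformers import pipeline

# Downloads several GB of weights on first call; pass device=0 to use a GPU.
generator = pipeline("text-generation", model="model-attribution-challenge/bloom-2b5")
print(generator("BLOOM is a multilingual language model that", max_new_tokens=30))
```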
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor for uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but may not be correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 | | | |
| Isi Zulu | 0.001 | | | |
| Igbo | 0.001 | | | |
| Xhosa | 0.001 | | | |
| Kinyarwanda | 0.003 | | | |
| Yoruba | 0.006 | | | |
| Swahili | 0.02 | | | |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
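For orientation, perplexity is the exponential of the mean cross-entropy: per token for standard perplexity, per byte for the byte_perplexity numbers reported below. A minimal illustrative conversion:
```python
import math

def perplexity(mean_cross_entropy_nats: float) -> float:
    # Lower is better; e.g. a mean loss of 2.0 nats gives perplexity ~7.39.
    return math.exp(mean_cross_entropy_nats)

print(perplexity(2.0))
```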
### Factors
*This section lists some different aspects of BLOOM models. Its focus is on aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Zero-shot evaluations:**
See this repository for JSON files: https://github.com/bigscience-workshop/evaluation-results
| Task | Language | Metric | BLOOM-2B5 |
|:----|:----|:----|:----:|
| arc_challenge | eng | acc ↑ | 0.28 |
| arc_easy | eng | acc ↑ | 0.595 |
| axb (Median of 10 prompts) | eng | acc ↑ | 0.443 |
| axg (Median of 10 prompts) | eng | acc ↑ | 0.5 |
| boolq (Median of 11 prompts) | eng | acc ↑ | 0.617 |
| cb (Median of 15 prompts) | eng | acc ↑ | 0.304 |
| cola (Median of 5 prompts) | eng | acc ↑ | 0.611 |
| copa (Median of 9 prompts) | eng | acc ↑ | 0.63 |
| crows_pairs_english (Median of 6 prompts) | eng | acc ↑ | 0.497 |
| crows_pairs_french (Median of 7 prompts) | fra | acc ↑ | 0.503 |
| diabla (Median of 2 prompts) | eng | acc ↑ | 0.289 |
| gsarti/flores_101_afr | afr | byte_perplexity ↓ | 6.501 |
| gsarti/flores_101_amh | amh | byte_perplexity ↓ | 3.973 |
| gsarti/flores_101_ara | ara | byte_perplexity ↓ | 1.808 |
| gsarti/flores_101_asm | asm | byte_perplexity ↓ | 5.699 |
| gsarti/flores_101_ast | ast | byte_perplexity ↓ | 3.925 |
| gsarti/flores_101_azj | azj | byte_perplexity ↓ | 6.943 |
| gsarti/flores_101_bel | bel | byte_perplexity ↓ | 3.614 |
| gsarti/flores_101_ben | ben | byte_perplexity ↓ | 5.121 |
| gsarti/flores_101_bos | bos | byte_perplexity ↓ | 5.653 |
| gsarti/flores_101_bul | bul | byte_perplexity ↓ | 2.701 |
| gsarti/flores_101_cat | cat | byte_perplexity ↓ | 2.305 |
| gsarti/flores_101_ceb | ceb | byte_perplexity ↓ | 6.291 |
| gsarti/flores_101_ces | ces | byte_perplexity ↓ | 5.447 |
| gsarti/flores_101_ckb | ckb | byte_perplexity ↓ | 3.726 |
| gsarti/flores_101_cym | cym | byte_perplexity ↓ | 12.539 |
| gsarti/flores_101_dan | dan | byte_perplexity ↓ | 5.183 |
| gsarti/flores_101_deu | deu | byte_perplexity ↓ | 3.118 |
| gsarti/flores_101_ell | ell | byte_perplexity ↓ | 2.468 |
| gsarti/flores_101_eng | eng | byte_perplexity ↓ | 2.019 |
| gsarti/flores_101_est | est | byte_perplexity ↓ | 9.117 |
| gsarti/flores_101_fas | fas | byte_perplexity ↓ | 3.058 |
| gsarti/flores_101_fin | fin | byte_perplexity ↓ | 6.847 |
| gsarti/flores_101_fra | fra | byte_perplexity ↓ | 1.998 |
| gsarti/flores_101_ful | ful | byte_perplexity ↓ | 11.466 |
| gsarti/flores_101_gle | gle | byte_perplexity ↓ | 8.681 |
| gsarti/flores_101_glg | glg | byte_perplexity ↓ | 3.03 |
| gsarti/flores_101_guj | guj | byte_perplexity ↓ | 4.955 |
| gsarti/flores_101_hau | hau | byte_perplexity ↓ | 10.758 |
| gsarti/flores_101_heb | heb | byte_perplexity ↓ | 3.6 |
| gsarti/flores_101_hin | hin | byte_perplexity ↓ | 4.713 |
| gsarti/flores_101_hrv | hrv | byte_perplexity ↓ | 5.822 |
| gsarti/flores_101_hun | hun | byte_perplexity ↓ | 6.44 |
| gsarti/flores_101_hye | hye | byte_perplexity ↓ | 3.658 |
| gsarti/flores_101_ibo | ibo | byte_perplexity ↓ | 5.565 |
| gsarti/flores_101_ind | ind | byte_perplexity ↓ | 2.16 |
| gsarti/flores_101_isl | isl | byte_perplexity ↓ | 8.082 |
| gsarti/flores_101_ita | ita | byte_perplexity ↓ | 2.969 |
| gsarti/flores_101_jav | jav | byte_perplexity ↓ | 7.057 |
| gsarti/flores_101_jpn | jpn | byte_perplexity ↓ | 2.776 |
| gsarti/flores_101_kam | kam | byte_perplexity ↓ | 11.073 |
| gsarti/flores_101_kan | kan | byte_perplexity ↓ | 5.552 |
| gsarti/flores_101_kat | kat | byte_perplexity ↓ | 2.523 |
| gsarti/flores_101_kaz | kaz | byte_perplexity ↓ | 3.39 |
| gsarti/flores_101_kea | kea | byte_perplexity ↓ | 8.919 |
| gsarti/flores_101_kir | kir | byte_perplexity ↓ | 3.729 |
| gsarti/flores_101_kor | kor | byte_perplexity ↓ | 3.933 |
| gsarti/flores_101_lao | lao | byte_perplexity ↓ | 2.908 |
| gsarti/flores_101_lav | lav | byte_perplexity ↓ | 7.777 |
| gsarti/flores_101_lin | lin | byte_perplexity ↓ | 7.525 |
| gsarti/flores_101_lit | lit | byte_perplexity ↓ | 7.369 |
| gsarti/flores_101_ltz | ltz | byte_perplexity ↓ | 8.801 |
| gsarti/flores_101_lug | lug | byte_perplexity ↓ | 8.483 |
| gsarti/flores_101_luo | luo | byte_perplexity ↓ | 11.976 |
| gsarti/flores_101_mal | mal | byte_perplexity ↓ | 4.616 |
| gsarti/flores_101_mar | mar | byte_perplexity ↓ | 5.483 |
| gsarti/flores_101_mkd | mkd | byte_perplexity ↓ | 2.966 |
| gsarti/flores_101_mlt | mlt | byte_perplexity ↓ | 15.005 |
| gsarti/flores_101_mon | mon | byte_perplexity ↓ | 3.411 |
| gsarti/flores_101_mri | mri | byte_perplexity ↓ | 7.474 |
| gsarti/flores_101_msa | msa | byte_perplexity ↓ | 2.571 |
| gsarti/flores_101_mya | mya | byte_perplexity ↓ | 2.414 |
| gsarti/flores_101_nld | nld | byte_perplexity ↓ | 4.128 |
| gsarti/flores_101_nob | nob | byte_perplexity ↓ | 5.403 |
| gsarti/flores_101_npi | npi | byte_perplexity ↓ | 5.199 |
| gsarti/flores_101_nso | nso | byte_perplexity ↓ | 8.155 |
| gsarti/flores_101_nya | nya | byte_perplexity ↓ | 8.18 |
| gsarti/flores_101_oci | oci | byte_perplexity ↓ | 4.862 |
| gsarti/flores_101_orm | orm | byte_perplexity ↓ | 12.912 |
| gsarti/flores_101_ory | ory | byte_perplexity ↓ | 5.189 |
| gsarti/flores_101_pan | pan | byte_perplexity ↓ | 4.698 |
| gsarti/flores_101_pol | pol | byte_perplexity ↓ | 4.626 |
| gsarti/flores_101_por | por | byte_perplexity ↓ | 1.975 |
| gsarti/flores_101_pus | pus | byte_perplexity ↓ | 4.496 |
| gsarti/flores_101_ron | ron | byte_perplexity ↓ | 4.965 |
| gsarti/flores_101_rus | rus | byte_perplexity ↓ | 2.05 |
| gsarti/flores_101_slk | slk | byte_perplexity ↓ | 6.451 |
| gsarti/flores_101_slv | slv | byte_perplexity ↓ | 6.62 |
| gsarti/flores_101_sna | sna | byte_perplexity ↓ | 8.462 |
| gsarti/flores_101_snd | snd | byte_perplexity ↓ | 5.466 |
| gsarti/flores_101_som | som | byte_perplexity ↓ | 11.959 |
| gsarti/flores_101_spa | spa | byte_perplexity ↓ | 1.897 |
| gsarti/flores_101_srp | srp | byte_perplexity ↓ | 2.871 |
| gsarti/flores_101_swe | swe | byte_perplexity ↓ | 5.055 |
| gsarti/flores_101_swh | swh | byte_perplexity ↓ | 3.697 |
| gsarti/flores_101_tam | tam | byte_perplexity ↓ | 4.539 |
| gsarti/flores_101_tel | tel | byte_perplexity ↓ | 5.807 |
| gsarti/flores_101_tgk | tgk | byte_perplexity ↓ | 3.599 |
| gsarti/flores_101_tgl | tgl | byte_perplexity ↓ | 5.667 |
| gsarti/flores_101_tha | tha | byte_perplexity ↓ | 2.366 |
| gsarti/flores_101_tur | tur | byte_perplexity ↓ | 4.885 |
| gsarti/flores_101_ukr | ukr | byte_perplexity ↓ | 2.724 |
| gsarti/flores_101_umb | umb | byte_perplexity ↓ | 12.767 |
| gsarti/flores_101_urd | urd | byte_perplexity ↓ | 1.98 |
| gsarti/flores_101_uzb | uzb | byte_perplexity ↓ | 12.002 |
| gsarti/flores_101_vie | vie | byte_perplexity ↓ | 1.766 |
| gsarti/flores_101_wol | wol | byte_perplexity ↓ | 9.144 |
| gsarti/flores_101_xho | xho | byte_perplexity ↓ | 7.403 |
| gsarti/flores_101_yor | yor | byte_perplexity ↓ | 5.913 |
| gsarti/flores_101_zho_simpl | zho_simpl | byte_perplexity ↓ | 2.277 |
| gsarti/flores_101_zho_trad | zho_trad | byte_perplexity ↓ | 2.518 |
| gsarti/flores_101_zul | zul | byte_perplexity ↓ | 8.534 |
| headqa | esp | acc ↑ | 0.264 |
| hellaswag | eng | acc ↑ | 0.412 |
| logiqa | eng | acc ↑ | 0.207 |
| mathqa | eng | acc ↑ | 0.25 |
| mc_taco | eng | em ↑ | 0.119 |
| mnli (Median of 15 prompts) | eng | acc ↑ | 0.355 |
| mnli_mismatched (Median of 15 prompts) | eng | acc ↑ | 0.352 |
| mrpc | eng | acc ↑ | 0.586 |
| multirc (Median of 11 prompts) | eng | acc ↑ | 0.538 |
| openbookqa | eng | acc ↑ | 0.216 |
| piqa | eng | acc ↑ | 0.708 |
| prost | eng | acc ↑ | 0.227 |
| pubmedqa | eng | acc ↑ | 0.616 |
| qnli | eng | acc ↑ | 0.507 |
| qqp (Median of 7 prompts) | eng | acc ↑ | 0.384 |
| race | eng | acc ↑ | 0.352 |
| rte (Median of 6 prompts) | eng | acc ↑ | 0.477 |
| sciq | eng | acc ↑ | 0.892 |
| sst (Median of 6 prompts) | eng | acc ↑ | 0.518 |
| triviaqa | eng | acc ↑ | 0.042 |
| tydiqa_primary (Median of 24 prompts) | eng | acc ↑ | 0.301 |
| webqs | eng | acc ↑ | 0.017 |
| wic (Median of 11 prompts) | eng | acc ↑ | 0.502 |
| winogrande | eng | acc ↑ | 0.586 |
| wnli (Median of 6 prompts) | eng | acc ↑ | 0.472 |
| wsc (Median of 11 prompts) | eng | acc ↑ | 0.442 |
| humaneval | python | pass@1 ↑ | 0.155 |
| humaneval | python | pass@10 ↑ | 0.322 |
| humaneval | python | pass@100 ↑ | 0.555 |
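The HumanEval pass@k scores above are conventionally computed with the unbiased estimator of Chen et al. (2021); whether that exact estimator and sample count were used here is an assumption, but a minimal sketch of the standard calculation is:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).

    n: samples generated per problem, c: samples passing the tests,
    k: evaluation budget. Returns the expected probability that at
    least one of k drawn samples passes.
    """
    if n - c < k:
        return 1.0
    # 1 minus the probability that all k drawn samples fail
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical counts chosen only so pass@1 matches the table above;
# the actual number of samples per problem is not stated here.
print(pass_at_k(200, 31, 1))  # 0.155
```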
**Train-time Evaluation:**
As of 25 May 2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
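These three numbers are mutually consistent under the usual convention that perplexity is the exponential of the validation loss in nats — an assumption about how the values were computed, not something stated above:

```python
import math

# Assumption: validation loss is a per-token cross-entropy in nats.
print(math.exp(2.2))  # ≈ 9.03, in line with the reported perplexity of 8.9
```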
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, and many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
| [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | [
"PUBMEDQA",
"SCIQ"
] |
rttl-ai/BIOptimus | rttl-ai | fill-mask | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"biology",
"medical",
"en",
"dataset:pubmed",
"arxiv:2308.08625",
"arxiv:2312.02803",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-23T21:22:37 | 2024-06-08T17:26:42 | 31 | 2 | ---
datasets:
- pubmed
language:
- en
license: apache-2.0
tags:
- biology
- medical
---
# rttl-ai/BIOptimus v.0.4
## Model Details
**Model Description:** The BIOptimus v.0.4 model is a BERT-like model pre-trained on [PubMed](https://pubmed.ncbi.nlm.nih.gov/) abstracts.
It is a biomedical language model pre-trained using contextualized weight distillation and Curriculum Learning.
This model achieves state-of-the-art performance on several biomedical NER datasets from the [BLURB benchmark](https://microsoft.github.io/BLURB/). A minimal usage sketch appears at the end of this section.
- **Developed by:** rttl-ai
- **Model Type:** Language model
- **Language(s):** English
- **License:** Apache-2.0
- **Resources for more information:**
- It is introduced in the paper *BIOptimus: Pre-training an Optimal Biomedical Language Model with Curriculum Learning for Named Entity Recognition* (BioNLP workshop @ ACL 2023).
- [arXiv:2308.08625](https://arxiv.org/abs/2308.08625)
- [arXiv:2312.02803](https://arxiv.org/abs/2312.02803)
- More information is available in [this repository](https://github.com/rttl-ai/BIOptimus).
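For quick experimentation, a minimal fill-mask sketch is given below. It assumes the checkpoint loads as a standard BERT masked LM through Hugging Face Transformers; the prompt and its outputs are purely illustrative:

```python
from transformers import pipeline

# Minimal sketch: treat BIOptimus as a standard BERT masked LM.
fill_mask = pipeline("fill-mask", model="rttl-ai/BIOptimus")

# Hypothetical biomedical prompt; [MASK] is BERT's default mask token.
for pred in fill_mask("Aspirin inhibits [MASK] synthesis."):
    print(pred["token_str"], round(pred["score"], 3))
```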
| [
"NAMED_ENTITY_RECOGNITION"
] | [
"BLURB"
] |
radia/snowflake-arctic-embed-l-Q4_K_M-GGUF | radia | sentence-similarity | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"arctic",
"snowflake-arctic-embed",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:quantized:Snowflake/snowflake-arctic-embed-l",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-06-01T15:38:33 | 2024-06-01T15:38:36 | 31 | 0 | ---
base_model: Snowflake/snowflake-arctic-embed-l
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: snowflake-arctic-embed-l
results:
- task:
type: Classification
dataset:
name: MTEB AmazonCounterfactualClassification (en)
type: mteb/amazon_counterfactual
config: en
split: test
revision: e8379541af4e31359cca9fbcf4b00f2671dba205
metrics:
- type: accuracy
value: 74.80597014925374
- type: ap
value: 37.911466766189875
- type: f1
value: 68.88606927542106
- task:
type: Classification
dataset:
name: MTEB AmazonPolarityClassification
type: mteb/amazon_polarity
config: default
split: test
revision: e2d317d38cd51312af73b3d32a06d1a08b442046
metrics:
- type: accuracy
value: 78.402275
- type: ap
value: 73.03294793248114
- type: f1
value: 78.3147786132161
- task:
type: Classification
dataset:
name: MTEB AmazonReviewsClassification (en)
type: mteb/amazon_reviews_multi
config: en
split: test
revision: 1399c76144fd37290681b995c656ef9b2e06e26d
metrics:
- type: accuracy
value: 36.717999999999996
- type: f1
value: 35.918044248787766
- task:
type: Retrieval
dataset:
name: MTEB ArguAna
type: mteb/arguana
config: default
split: test
revision: c22ab2a51041ffd869aaddef7af8d8215647e41a
metrics:
- type: map_at_1
value: 34.495
- type: map_at_10
value: 50.236000000000004
- type: map_at_100
value: 50.944
- type: map_at_1000
value: 50.94499999999999
- type: map_at_3
value: 45.341
- type: map_at_5
value: 48.286
- type: mrr_at_1
value: 35.135
- type: mrr_at_10
value: 50.471
- type: mrr_at_100
value: 51.185
- type: mrr_at_1000
value: 51.187000000000005
- type: mrr_at_3
value: 45.602
- type: mrr_at_5
value: 48.468
- type: ndcg_at_1
value: 34.495
- type: ndcg_at_10
value: 59.086000000000006
- type: ndcg_at_100
value: 61.937
- type: ndcg_at_1000
value: 61.966
- type: ndcg_at_3
value: 49.062
- type: ndcg_at_5
value: 54.367
- type: precision_at_1
value: 34.495
- type: precision_at_10
value: 8.734
- type: precision_at_100
value: 0.9939999999999999
- type: precision_at_1000
value: 0.1
- type: precision_at_3
value: 19.962
- type: precision_at_5
value: 14.552000000000001
- type: recall_at_1
value: 34.495
- type: recall_at_10
value: 87.33999999999999
- type: recall_at_100
value: 99.431
- type: recall_at_1000
value: 99.644
- type: recall_at_3
value: 59.885999999999996
- type: recall_at_5
value: 72.76
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringP2P
type: mteb/arxiv-clustering-p2p
config: default
split: test
revision: a122ad7f3f0291bf49cc6f4d32aa80929df69d5d
metrics:
- type: v_measure
value: 47.46440874635501
- task:
type: Clustering
dataset:
name: MTEB ArxivClusteringS2S
type: mteb/arxiv-clustering-s2s
config: default
split: test
revision: f910caf1a6075f7329cdf8c1a6135696f37dbd53
metrics:
- type: v_measure
value: 38.28720154213723
- task:
type: Reranking
dataset:
name: MTEB AskUbuntuDupQuestions
type: mteb/askubuntudupquestions-reranking
config: default
split: test
revision: 2000358ca161889fa9c082cb41daa8dcfb161a54
metrics:
- type: map
value: 60.34614226394902
- type: mrr
value: 75.05628105351096
- task:
type: STS
dataset:
name: MTEB BIOSSES
type: mteb/biosses-sts
config: default
split: test
revision: d3fb88f8f02e40887cd149695127462bbcf29b4a
metrics:
- type: cos_sim_pearson
value: 87.41072716728198
- type: cos_sim_spearman
value: 86.34534093114372
- type: euclidean_pearson
value: 85.34009667750838
- type: euclidean_spearman
value: 86.34534093114372
- type: manhattan_pearson
value: 85.2158833586889
- type: manhattan_spearman
value: 86.60920236509224
- task:
type: Classification
dataset:
name: MTEB Banking77Classification
type: mteb/banking77
config: default
split: test
revision: 0fd18e25b25c072e09e0d92ab615fda904d66300
metrics:
- type: accuracy
value: 80.06493506493507
- type: f1
value: 79.28108600339833
- task:
type: Clustering
dataset:
name: MTEB BigPatentClustering
type: jinaai/big-patent-clustering
config: default
split: test
revision: 62d5330920bca426ce9d3c76ea914f15fc83e891
metrics:
- type: v_measure
value: 20.545049432417287
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringP2P
type: mteb/biorxiv-clustering-p2p
config: default
split: test
revision: 65b79d1d13f80053f67aca9498d9402c2d9f1f40
metrics:
- type: v_measure
value: 37.54369718479804
- task:
type: Clustering
dataset:
name: MTEB BiorxivClusteringS2S
type: mteb/biorxiv-clustering-s2s
config: default
split: test
revision: 258694dd0231531bc1fd9de6ceb52a0853c6d908
metrics:
- type: v_measure
value: 32.64941588219162
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackAndroidRetrieval
type: mteb/cqadupstack-android
config: default
split: test
revision: f46a197baaae43b4f621051089b82a364682dfeb
metrics:
- type: map_at_1
value: 37.264
- type: map_at_10
value: 49.43
- type: map_at_100
value: 50.967
- type: map_at_1000
value: 51.08200000000001
- type: map_at_3
value: 45.742
- type: map_at_5
value: 47.764
- type: mrr_at_1
value: 44.921
- type: mrr_at_10
value: 54.879999999999995
- type: mrr_at_100
value: 55.525000000000006
- type: mrr_at_1000
value: 55.565
- type: mrr_at_3
value: 52.480000000000004
- type: mrr_at_5
value: 53.86
- type: ndcg_at_1
value: 44.921
- type: ndcg_at_10
value: 55.664
- type: ndcg_at_100
value: 60.488
- type: ndcg_at_1000
value: 62.138000000000005
- type: ndcg_at_3
value: 50.797000000000004
- type: ndcg_at_5
value: 52.94799999999999
- type: precision_at_1
value: 44.921
- type: precision_at_10
value: 10.587
- type: precision_at_100
value: 1.629
- type: precision_at_1000
value: 0.203
- type: precision_at_3
value: 24.034
- type: precision_at_5
value: 17.224999999999998
- type: recall_at_1
value: 37.264
- type: recall_at_10
value: 67.15
- type: recall_at_100
value: 86.811
- type: recall_at_1000
value: 97.172
- type: recall_at_3
value: 53.15800000000001
- type: recall_at_5
value: 59.116
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackEnglishRetrieval
type: mteb/cqadupstack-english
config: default
split: test
revision: ad9991cb51e31e31e430383c75ffb2885547b5f0
metrics:
- type: map_at_1
value: 36.237
- type: map_at_10
value: 47.941
- type: map_at_100
value: 49.131
- type: map_at_1000
value: 49.26
- type: map_at_3
value: 44.561
- type: map_at_5
value: 46.28
- type: mrr_at_1
value: 45.605000000000004
- type: mrr_at_10
value: 54.039
- type: mrr_at_100
value: 54.653
- type: mrr_at_1000
value: 54.688
- type: mrr_at_3
value: 52.006
- type: mrr_at_5
value: 53.096
- type: ndcg_at_1
value: 45.605000000000004
- type: ndcg_at_10
value: 53.916
- type: ndcg_at_100
value: 57.745999999999995
- type: ndcg_at_1000
value: 59.492999999999995
- type: ndcg_at_3
value: 49.774
- type: ndcg_at_5
value: 51.434999999999995
- type: precision_at_1
value: 45.605000000000004
- type: precision_at_10
value: 10.229000000000001
- type: precision_at_100
value: 1.55
- type: precision_at_1000
value: 0.2
- type: precision_at_3
value: 24.098
- type: precision_at_5
value: 16.726
- type: recall_at_1
value: 36.237
- type: recall_at_10
value: 64.03
- type: recall_at_100
value: 80.423
- type: recall_at_1000
value: 91.03
- type: recall_at_3
value: 51.20400000000001
- type: recall_at_5
value: 56.298
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGamingRetrieval
type: mteb/cqadupstack-gaming
config: default
split: test
revision: 4885aa143210c98657558c04aaf3dc47cfb54340
metrics:
- type: map_at_1
value: 47.278
- type: map_at_10
value: 59.757000000000005
- type: map_at_100
value: 60.67
- type: map_at_1000
value: 60.714
- type: map_at_3
value: 56.714
- type: map_at_5
value: 58.453
- type: mrr_at_1
value: 53.73
- type: mrr_at_10
value: 62.970000000000006
- type: mrr_at_100
value: 63.507999999999996
- type: mrr_at_1000
value: 63.53
- type: mrr_at_3
value: 60.909
- type: mrr_at_5
value: 62.172000000000004
- type: ndcg_at_1
value: 53.73
- type: ndcg_at_10
value: 64.97
- type: ndcg_at_100
value: 68.394
- type: ndcg_at_1000
value: 69.255
- type: ndcg_at_3
value: 60.228
- type: ndcg_at_5
value: 62.617999999999995
- type: precision_at_1
value: 53.73
- type: precision_at_10
value: 10.056
- type: precision_at_100
value: 1.265
- type: precision_at_1000
value: 0.13699999999999998
- type: precision_at_3
value: 26.332
- type: precision_at_5
value: 17.743000000000002
- type: recall_at_1
value: 47.278
- type: recall_at_10
value: 76.86500000000001
- type: recall_at_100
value: 91.582
- type: recall_at_1000
value: 97.583
- type: recall_at_3
value: 64.443
- type: recall_at_5
value: 70.283
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackGisRetrieval
type: mteb/cqadupstack-gis
config: default
split: test
revision: 5003b3064772da1887988e05400cf3806fe491f2
metrics:
- type: map_at_1
value: 29.702
- type: map_at_10
value: 39.463
- type: map_at_100
value: 40.508
- type: map_at_1000
value: 40.579
- type: map_at_3
value: 36.748999999999995
- type: map_at_5
value: 38.296
- type: mrr_at_1
value: 31.977
- type: mrr_at_10
value: 41.739
- type: mrr_at_100
value: 42.586
- type: mrr_at_1000
value: 42.636
- type: mrr_at_3
value: 39.096
- type: mrr_at_5
value: 40.695
- type: ndcg_at_1
value: 31.977
- type: ndcg_at_10
value: 44.855000000000004
- type: ndcg_at_100
value: 49.712
- type: ndcg_at_1000
value: 51.443000000000005
- type: ndcg_at_3
value: 39.585
- type: ndcg_at_5
value: 42.244
- type: precision_at_1
value: 31.977
- type: precision_at_10
value: 6.768000000000001
- type: precision_at_100
value: 0.9690000000000001
- type: precision_at_1000
value: 0.116
- type: precision_at_3
value: 16.761
- type: precision_at_5
value: 11.593
- type: recall_at_1
value: 29.702
- type: recall_at_10
value: 59.082
- type: recall_at_100
value: 80.92
- type: recall_at_1000
value: 93.728
- type: recall_at_3
value: 45.212
- type: recall_at_5
value: 51.449
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackMathematicaRetrieval
type: mteb/cqadupstack-mathematica
config: default
split: test
revision: 90fceea13679c63fe563ded68f3b6f06e50061de
metrics:
- type: map_at_1
value: 21.336
- type: map_at_10
value: 30.137999999999998
- type: map_at_100
value: 31.385
- type: map_at_1000
value: 31.495
- type: map_at_3
value: 27.481
- type: map_at_5
value: 28.772
- type: mrr_at_1
value: 25.871
- type: mrr_at_10
value: 34.686
- type: mrr_at_100
value: 35.649
- type: mrr_at_1000
value: 35.705
- type: mrr_at_3
value: 32.09
- type: mrr_at_5
value: 33.52
- type: ndcg_at_1
value: 25.871
- type: ndcg_at_10
value: 35.617
- type: ndcg_at_100
value: 41.272999999999996
- type: ndcg_at_1000
value: 43.725
- type: ndcg_at_3
value: 30.653999999999996
- type: ndcg_at_5
value: 32.714
- type: precision_at_1
value: 25.871
- type: precision_at_10
value: 6.4799999999999995
- type: precision_at_100
value: 1.0699999999999998
- type: precision_at_1000
value: 0.13999999999999999
- type: precision_at_3
value: 14.469000000000001
- type: precision_at_5
value: 10.274
- type: recall_at_1
value: 21.336
- type: recall_at_10
value: 47.746
- type: recall_at_100
value: 71.773
- type: recall_at_1000
value: 89.05199999999999
- type: recall_at_3
value: 34.172999999999995
- type: recall_at_5
value: 39.397999999999996
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackPhysicsRetrieval
type: mteb/cqadupstack-physics
config: default
split: test
revision: 79531abbd1fb92d06c6d6315a0cbbbf5bb247ea4
metrics:
- type: map_at_1
value: 34.424
- type: map_at_10
value: 45.647999999999996
- type: map_at_100
value: 46.907
- type: map_at_1000
value: 47.010999999999996
- type: map_at_3
value: 42.427
- type: map_at_5
value: 44.285000000000004
- type: mrr_at_1
value: 41.867
- type: mrr_at_10
value: 51.17699999999999
- type: mrr_at_100
value: 51.937
- type: mrr_at_1000
value: 51.975
- type: mrr_at_3
value: 48.941
- type: mrr_at_5
value: 50.322
- type: ndcg_at_1
value: 41.867
- type: ndcg_at_10
value: 51.534
- type: ndcg_at_100
value: 56.696999999999996
- type: ndcg_at_1000
value: 58.475
- type: ndcg_at_3
value: 46.835
- type: ndcg_at_5
value: 49.161
- type: precision_at_1
value: 41.867
- type: precision_at_10
value: 9.134
- type: precision_at_100
value: 1.362
- type: precision_at_1000
value: 0.17099999999999999
- type: precision_at_3
value: 22.073
- type: precision_at_5
value: 15.495999999999999
- type: recall_at_1
value: 34.424
- type: recall_at_10
value: 63.237
- type: recall_at_100
value: 84.774
- type: recall_at_1000
value: 95.987
- type: recall_at_3
value: 49.888
- type: recall_at_5
value: 55.940999999999995
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackProgrammersRetrieval
type: mteb/cqadupstack-programmers
config: default
split: test
revision: 6184bc1440d2dbc7612be22b50686b8826d22b32
metrics:
- type: map_at_1
value: 30.72
- type: map_at_10
value: 41.327999999999996
- type: map_at_100
value: 42.651
- type: map_at_1000
value: 42.739
- type: map_at_3
value: 38.223
- type: map_at_5
value: 40.053
- type: mrr_at_1
value: 37.9
- type: mrr_at_10
value: 46.857
- type: mrr_at_100
value: 47.673
- type: mrr_at_1000
value: 47.711999999999996
- type: mrr_at_3
value: 44.292
- type: mrr_at_5
value: 45.845
- type: ndcg_at_1
value: 37.9
- type: ndcg_at_10
value: 47.105999999999995
- type: ndcg_at_100
value: 52.56999999999999
- type: ndcg_at_1000
value: 54.37800000000001
- type: ndcg_at_3
value: 42.282
- type: ndcg_at_5
value: 44.646
- type: precision_at_1
value: 37.9
- type: precision_at_10
value: 8.368
- type: precision_at_100
value: 1.283
- type: precision_at_1000
value: 0.16
- type: precision_at_3
value: 20.015
- type: precision_at_5
value: 14.132
- type: recall_at_1
value: 30.72
- type: recall_at_10
value: 58.826
- type: recall_at_100
value: 82.104
- type: recall_at_1000
value: 94.194
- type: recall_at_3
value: 44.962999999999994
- type: recall_at_5
value: 51.426
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackRetrieval
type: mteb/cqadupstack
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 31.656583333333334
- type: map_at_10
value: 41.59883333333333
- type: map_at_100
value: 42.80350000000001
- type: map_at_1000
value: 42.91075
- type: map_at_3
value: 38.68908333333333
- type: map_at_5
value: 40.27733333333334
- type: mrr_at_1
value: 37.23483333333334
- type: mrr_at_10
value: 45.782000000000004
- type: mrr_at_100
value: 46.577083333333334
- type: mrr_at_1000
value: 46.62516666666667
- type: mrr_at_3
value: 43.480666666666664
- type: mrr_at_5
value: 44.79833333333333
- type: ndcg_at_1
value: 37.23483333333334
- type: ndcg_at_10
value: 46.971500000000006
- type: ndcg_at_100
value: 51.90125
- type: ndcg_at_1000
value: 53.86366666666667
- type: ndcg_at_3
value: 42.31791666666667
- type: ndcg_at_5
value: 44.458666666666666
- type: precision_at_1
value: 37.23483333333334
- type: precision_at_10
value: 8.044583333333332
- type: precision_at_100
value: 1.2334166666666666
- type: precision_at_1000
value: 0.15925
- type: precision_at_3
value: 19.240833333333327
- type: precision_at_5
value: 13.435083333333333
- type: recall_at_1
value: 31.656583333333334
- type: recall_at_10
value: 58.44758333333333
- type: recall_at_100
value: 79.93658333333332
- type: recall_at_1000
value: 93.32491666666668
- type: recall_at_3
value: 45.44266666666667
- type: recall_at_5
value: 50.99866666666666
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackStatsRetrieval
type: mteb/cqadupstack-stats
config: default
split: test
revision: 65ac3a16b8e91f9cee4c9828cc7c335575432a2a
metrics:
- type: map_at_1
value: 28.247
- type: map_at_10
value: 35.443999999999996
- type: map_at_100
value: 36.578
- type: map_at_1000
value: 36.675999999999995
- type: map_at_3
value: 33.276
- type: map_at_5
value: 34.536
- type: mrr_at_1
value: 31.747999999999998
- type: mrr_at_10
value: 38.413000000000004
- type: mrr_at_100
value: 39.327
- type: mrr_at_1000
value: 39.389
- type: mrr_at_3
value: 36.401
- type: mrr_at_5
value: 37.543
- type: ndcg_at_1
value: 31.747999999999998
- type: ndcg_at_10
value: 39.646
- type: ndcg_at_100
value: 44.861000000000004
- type: ndcg_at_1000
value: 47.197
- type: ndcg_at_3
value: 35.764
- type: ndcg_at_5
value: 37.635999999999996
- type: precision_at_1
value: 31.747999999999998
- type: precision_at_10
value: 6.12
- type: precision_at_100
value: 0.942
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 15.235000000000001
- type: precision_at_5
value: 10.491
- type: recall_at_1
value: 28.247
- type: recall_at_10
value: 49.456
- type: recall_at_100
value: 73.02499999999999
- type: recall_at_1000
value: 89.898
- type: recall_at_3
value: 38.653999999999996
- type: recall_at_5
value: 43.259
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackTexRetrieval
type: mteb/cqadupstack-tex
config: default
split: test
revision: 46989137a86843e03a6195de44b09deda022eec7
metrics:
- type: map_at_1
value: 22.45
- type: map_at_10
value: 30.476999999999997
- type: map_at_100
value: 31.630999999999997
- type: map_at_1000
value: 31.755
- type: map_at_3
value: 27.989000000000004
- type: map_at_5
value: 29.410999999999998
- type: mrr_at_1
value: 26.979
- type: mrr_at_10
value: 34.316
- type: mrr_at_100
value: 35.272999999999996
- type: mrr_at_1000
value: 35.342
- type: mrr_at_3
value: 32.14
- type: mrr_at_5
value: 33.405
- type: ndcg_at_1
value: 26.979
- type: ndcg_at_10
value: 35.166
- type: ndcg_at_100
value: 40.583000000000006
- type: ndcg_at_1000
value: 43.282
- type: ndcg_at_3
value: 30.916
- type: ndcg_at_5
value: 32.973
- type: precision_at_1
value: 26.979
- type: precision_at_10
value: 6.132
- type: precision_at_100
value: 1.047
- type: precision_at_1000
value: 0.145
- type: precision_at_3
value: 14.360999999999999
- type: precision_at_5
value: 10.227
- type: recall_at_1
value: 22.45
- type: recall_at_10
value: 45.348
- type: recall_at_100
value: 69.484
- type: recall_at_1000
value: 88.628
- type: recall_at_3
value: 33.338
- type: recall_at_5
value: 38.746
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackUnixRetrieval
type: mteb/cqadupstack-unix
config: default
split: test
revision: 6c6430d3a6d36f8d2a829195bc5dc94d7e063e53
metrics:
- type: map_at_1
value: 32.123000000000005
- type: map_at_10
value: 41.778
- type: map_at_100
value: 42.911
- type: map_at_1000
value: 42.994
- type: map_at_3
value: 38.558
- type: map_at_5
value: 40.318
- type: mrr_at_1
value: 37.687
- type: mrr_at_10
value: 45.889
- type: mrr_at_100
value: 46.672999999999995
- type: mrr_at_1000
value: 46.72
- type: mrr_at_3
value: 43.33
- type: mrr_at_5
value: 44.734
- type: ndcg_at_1
value: 37.687
- type: ndcg_at_10
value: 47.258
- type: ndcg_at_100
value: 52.331
- type: ndcg_at_1000
value: 54.152
- type: ndcg_at_3
value: 41.857
- type: ndcg_at_5
value: 44.283
- type: precision_at_1
value: 37.687
- type: precision_at_10
value: 7.892
- type: precision_at_100
value: 1.183
- type: precision_at_1000
value: 0.14300000000000002
- type: precision_at_3
value: 18.781
- type: precision_at_5
value: 13.134
- type: recall_at_1
value: 32.123000000000005
- type: recall_at_10
value: 59.760000000000005
- type: recall_at_100
value: 81.652
- type: recall_at_1000
value: 94.401
- type: recall_at_3
value: 44.996
- type: recall_at_5
value: 51.184
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWebmastersRetrieval
type: mteb/cqadupstack-webmasters
config: default
split: test
revision: 160c094312a0e1facb97e55eeddb698c0abe3571
metrics:
- type: map_at_1
value: 33.196999999999996
- type: map_at_10
value: 42.012
- type: map_at_100
value: 43.663999999999994
- type: map_at_1000
value: 43.883
- type: map_at_3
value: 39.33
- type: map_at_5
value: 40.586
- type: mrr_at_1
value: 39.328
- type: mrr_at_10
value: 46.57
- type: mrr_at_100
value: 47.508
- type: mrr_at_1000
value: 47.558
- type: mrr_at_3
value: 44.532
- type: mrr_at_5
value: 45.58
- type: ndcg_at_1
value: 39.328
- type: ndcg_at_10
value: 47.337
- type: ndcg_at_100
value: 52.989
- type: ndcg_at_1000
value: 55.224
- type: ndcg_at_3
value: 43.362
- type: ndcg_at_5
value: 44.866
- type: precision_at_1
value: 39.328
- type: precision_at_10
value: 8.577
- type: precision_at_100
value: 1.5789999999999997
- type: precision_at_1000
value: 0.25
- type: precision_at_3
value: 19.697
- type: precision_at_5
value: 13.755
- type: recall_at_1
value: 33.196999999999996
- type: recall_at_10
value: 56.635000000000005
- type: recall_at_100
value: 81.882
- type: recall_at_1000
value: 95.342
- type: recall_at_3
value: 44.969
- type: recall_at_5
value: 49.266
- task:
type: Retrieval
dataset:
name: MTEB CQADupstackWordpressRetrieval
type: mteb/cqadupstack-wordpress
config: default
split: test
revision: 4ffe81d471b1924886b33c7567bfb200e9eec5c4
metrics:
- type: map_at_1
value: 26.901000000000003
- type: map_at_10
value: 35.77
- type: map_at_100
value: 36.638999999999996
- type: map_at_1000
value: 36.741
- type: map_at_3
value: 33.219
- type: map_at_5
value: 34.574
- type: mrr_at_1
value: 29.205
- type: mrr_at_10
value: 37.848
- type: mrr_at_100
value: 38.613
- type: mrr_at_1000
value: 38.682
- type: mrr_at_3
value: 35.551
- type: mrr_at_5
value: 36.808
- type: ndcg_at_1
value: 29.205
- type: ndcg_at_10
value: 40.589
- type: ndcg_at_100
value: 45.171
- type: ndcg_at_1000
value: 47.602
- type: ndcg_at_3
value: 35.760999999999996
- type: ndcg_at_5
value: 37.980000000000004
- type: precision_at_1
value: 29.205
- type: precision_at_10
value: 6.192
- type: precision_at_100
value: 0.922
- type: precision_at_1000
value: 0.123
- type: precision_at_3
value: 15.034
- type: precision_at_5
value: 10.424999999999999
- type: recall_at_1
value: 26.901000000000003
- type: recall_at_10
value: 53.236000000000004
- type: recall_at_100
value: 74.809
- type: recall_at_1000
value: 92.884
- type: recall_at_3
value: 40.314
- type: recall_at_5
value: 45.617999999999995
- task:
type: Retrieval
dataset:
name: MTEB ClimateFEVER
type: mteb/climate-fever
config: default
split: test
revision: 47f2ac6acb640fc46020b02a5b59fdda04d39380
metrics:
- type: map_at_1
value: 16.794999999999998
- type: map_at_10
value: 29.322
- type: map_at_100
value: 31.463
- type: map_at_1000
value: 31.643
- type: map_at_3
value: 24.517
- type: map_at_5
value: 27.237000000000002
- type: mrr_at_1
value: 37.655
- type: mrr_at_10
value: 50.952
- type: mrr_at_100
value: 51.581999999999994
- type: mrr_at_1000
value: 51.61
- type: mrr_at_3
value: 47.991
- type: mrr_at_5
value: 49.744
- type: ndcg_at_1
value: 37.655
- type: ndcg_at_10
value: 39.328
- type: ndcg_at_100
value: 46.358
- type: ndcg_at_1000
value: 49.245
- type: ndcg_at_3
value: 33.052
- type: ndcg_at_5
value: 35.407
- type: precision_at_1
value: 37.655
- type: precision_at_10
value: 12.202
- type: precision_at_100
value: 1.9789999999999999
- type: precision_at_1000
value: 0.252
- type: precision_at_3
value: 24.973
- type: precision_at_5
value: 19.075
- type: recall_at_1
value: 16.794999999999998
- type: recall_at_10
value: 45.716
- type: recall_at_100
value: 68.919
- type: recall_at_1000
value: 84.71600000000001
- type: recall_at_3
value: 30.135
- type: recall_at_5
value: 37.141999999999996
- task:
type: Retrieval
dataset:
name: MTEB DBPedia
type: mteb/dbpedia
config: default
split: test
revision: c0f706b76e590d620bd6618b3ca8efdd34e2d659
metrics:
- type: map_at_1
value: 9.817
- type: map_at_10
value: 22.058
- type: map_at_100
value: 31.805
- type: map_at_1000
value: 33.562999999999995
- type: map_at_3
value: 15.537
- type: map_at_5
value: 18.199
- type: mrr_at_1
value: 72.75
- type: mrr_at_10
value: 79.804
- type: mrr_at_100
value: 80.089
- type: mrr_at_1000
value: 80.09100000000001
- type: mrr_at_3
value: 78.75
- type: mrr_at_5
value: 79.325
- type: ndcg_at_1
value: 59.875
- type: ndcg_at_10
value: 45.972
- type: ndcg_at_100
value: 51.092999999999996
- type: ndcg_at_1000
value: 58.048
- type: ndcg_at_3
value: 50.552
- type: ndcg_at_5
value: 47.672
- type: precision_at_1
value: 72.75
- type: precision_at_10
value: 37.05
- type: precision_at_100
value: 12.005
- type: precision_at_1000
value: 2.221
- type: precision_at_3
value: 54.083000000000006
- type: precision_at_5
value: 46.2
- type: recall_at_1
value: 9.817
- type: recall_at_10
value: 27.877000000000002
- type: recall_at_100
value: 57.974000000000004
- type: recall_at_1000
value: 80.085
- type: recall_at_3
value: 16.911
- type: recall_at_5
value: 20.689
- task:
type: Classification
dataset:
name: MTEB EmotionClassification
type: mteb/emotion
config: default
split: test
revision: 4f58c6b202a23cf9a4da393831edf4f9183cad37
metrics:
- type: accuracy
value: 46.464999999999996
- type: f1
value: 42.759588662873796
- task:
type: Retrieval
dataset:
name: MTEB FEVER
type: mteb/fever
config: default
split: test
revision: bea83ef9e8fb933d90a2f1d5515737465d613e12
metrics:
- type: map_at_1
value: 75.82900000000001
- type: map_at_10
value: 84.613
- type: map_at_100
value: 84.845
- type: map_at_1000
value: 84.855
- type: map_at_3
value: 83.498
- type: map_at_5
value: 84.29299999999999
- type: mrr_at_1
value: 81.69800000000001
- type: mrr_at_10
value: 88.84100000000001
- type: mrr_at_100
value: 88.887
- type: mrr_at_1000
value: 88.888
- type: mrr_at_3
value: 88.179
- type: mrr_at_5
value: 88.69200000000001
- type: ndcg_at_1
value: 81.69800000000001
- type: ndcg_at_10
value: 88.21799999999999
- type: ndcg_at_100
value: 88.961
- type: ndcg_at_1000
value: 89.131
- type: ndcg_at_3
value: 86.591
- type: ndcg_at_5
value: 87.666
- type: precision_at_1
value: 81.69800000000001
- type: precision_at_10
value: 10.615
- type: precision_at_100
value: 1.125
- type: precision_at_1000
value: 0.11499999999999999
- type: precision_at_3
value: 33.208
- type: precision_at_5
value: 20.681
- type: recall_at_1
value: 75.82900000000001
- type: recall_at_10
value: 94.97
- type: recall_at_100
value: 97.786
- type: recall_at_1000
value: 98.809
- type: recall_at_3
value: 90.625
- type: recall_at_5
value: 93.345
- task:
type: Retrieval
dataset:
name: MTEB FiQA2018
type: mteb/fiqa
config: default
split: test
revision: 27a168819829fe9bcd655c2df245fb19452e8e06
metrics:
- type: map_at_1
value: 22.788
- type: map_at_10
value: 36.71
- type: map_at_100
value: 38.527
- type: map_at_1000
value: 38.701
- type: map_at_3
value: 32.318999999999996
- type: map_at_5
value: 34.809
- type: mrr_at_1
value: 44.444
- type: mrr_at_10
value: 52.868
- type: mrr_at_100
value: 53.52400000000001
- type: mrr_at_1000
value: 53.559999999999995
- type: mrr_at_3
value: 50.153999999999996
- type: mrr_at_5
value: 51.651
- type: ndcg_at_1
value: 44.444
- type: ndcg_at_10
value: 44.707
- type: ndcg_at_100
value: 51.174
- type: ndcg_at_1000
value: 53.996
- type: ndcg_at_3
value: 40.855999999999995
- type: ndcg_at_5
value: 42.113
- type: precision_at_1
value: 44.444
- type: precision_at_10
value: 12.021999999999998
- type: precision_at_100
value: 1.8950000000000002
- type: precision_at_1000
value: 0.241
- type: precision_at_3
value: 26.8
- type: precision_at_5
value: 19.66
- type: recall_at_1
value: 22.788
- type: recall_at_10
value: 51.793
- type: recall_at_100
value: 75.69500000000001
- type: recall_at_1000
value: 92.292
- type: recall_at_3
value: 37.375
- type: recall_at_5
value: 43.682
- task:
type: Retrieval
dataset:
name: MTEB HotpotQA
type: mteb/hotpotqa
config: default
split: test
revision: ab518f4d6fcca38d87c25209f94beba119d02014
metrics:
- type: map_at_1
value: 41.276
- type: map_at_10
value: 67.245
- type: map_at_100
value: 68.061
- type: map_at_1000
value: 68.11399999999999
- type: map_at_3
value: 63.693
- type: map_at_5
value: 65.90899999999999
- type: mrr_at_1
value: 82.552
- type: mrr_at_10
value: 87.741
- type: mrr_at_100
value: 87.868
- type: mrr_at_1000
value: 87.871
- type: mrr_at_3
value: 86.98599999999999
- type: mrr_at_5
value: 87.469
- type: ndcg_at_1
value: 82.552
- type: ndcg_at_10
value: 75.176
- type: ndcg_at_100
value: 77.902
- type: ndcg_at_1000
value: 78.852
- type: ndcg_at_3
value: 70.30499999999999
- type: ndcg_at_5
value: 73.00999999999999
- type: precision_at_1
value: 82.552
- type: precision_at_10
value: 15.765
- type: precision_at_100
value: 1.788
- type: precision_at_1000
value: 0.191
- type: precision_at_3
value: 45.375
- type: precision_at_5
value: 29.360999999999997
- type: recall_at_1
value: 41.276
- type: recall_at_10
value: 78.825
- type: recall_at_100
value: 89.41900000000001
- type: recall_at_1000
value: 95.625
- type: recall_at_3
value: 68.062
- type: recall_at_5
value: 73.40299999999999
- task:
type: Classification
dataset:
name: MTEB ImdbClassification
type: mteb/imdb
config: default
split: test
revision: 3d86128a09e091d6018b6d26cad27f2739fc2db7
metrics:
- type: accuracy
value: 72.876
- type: ap
value: 67.15477852410164
- type: f1
value: 72.65147370025373
- task:
type: Retrieval
dataset:
name: MTEB MSMARCO
type: mteb/msmarco
config: default
split: dev
revision: c5a29a104738b98a9e76336939199e264163d4a0
metrics:
- type: map_at_1
value: 21.748
- type: map_at_10
value: 34.626000000000005
- type: map_at_100
value: 35.813
- type: map_at_1000
value: 35.859
- type: map_at_3
value: 30.753000000000004
- type: map_at_5
value: 33.049
- type: mrr_at_1
value: 22.35
- type: mrr_at_10
value: 35.23
- type: mrr_at_100
value: 36.359
- type: mrr_at_1000
value: 36.399
- type: mrr_at_3
value: 31.436999999999998
- type: mrr_at_5
value: 33.687
- type: ndcg_at_1
value: 22.364
- type: ndcg_at_10
value: 41.677
- type: ndcg_at_100
value: 47.355999999999995
- type: ndcg_at_1000
value: 48.494
- type: ndcg_at_3
value: 33.85
- type: ndcg_at_5
value: 37.942
- type: precision_at_1
value: 22.364
- type: precision_at_10
value: 6.6000000000000005
- type: precision_at_100
value: 0.9450000000000001
- type: precision_at_1000
value: 0.104
- type: precision_at_3
value: 14.527000000000001
- type: precision_at_5
value: 10.796999999999999
- type: recall_at_1
value: 21.748
- type: recall_at_10
value: 63.292
- type: recall_at_100
value: 89.427
- type: recall_at_1000
value: 98.13499999999999
- type: recall_at_3
value: 42.126000000000005
- type: recall_at_5
value: 51.968
- task:
type: Classification
dataset:
name: MTEB MTOPDomainClassification (en)
type: mteb/mtop_domain
config: en
split: test
revision: d80d48c1eb48d3562165c59d59d0034df9fff0bf
metrics:
- type: accuracy
value: 92.62425900592795
- type: f1
value: 92.08497761553683
- task:
type: Classification
dataset:
name: MTEB MTOPIntentClassification (en)
type: mteb/mtop_intent
config: en
split: test
revision: ae001d0e6b1228650b7bd1c2c65fb50ad11a8aba
metrics:
- type: accuracy
value: 64.51436388508893
- type: f1
value: 45.884016531912906
- task:
type: Classification
dataset:
name: MTEB MasakhaNEWSClassification (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: accuracy
value: 76.57172995780591
- type: f1
value: 75.52979910878491
- task:
type: Clustering
dataset:
name: MTEB MasakhaNEWSClusteringP2P (eng)
type: masakhane/masakhanews
config: eng
split: test
revision: 8ccc72e69e65f40c70e117d8b3c08306bb788b60
metrics:
- type: v_measure
value: 44.84052695201612
- type: v_measure
value: 21.443971229936494
- task:
type: Classification
dataset:
name: MTEB MassiveIntentClassification (en)
type: mteb/amazon_massive_intent
config: en
split: test
revision: 31efe3c427b0bae9c22cbb560b8f15491cc6bed7
metrics:
- type: accuracy
value: 65.79354404841965
- type: f1
value: 63.17260074126185
- task:
type: Classification
dataset:
name: MTEB MassiveScenarioClassification (en)
type: mteb/amazon_massive_scenario
config: en
split: test
revision: 7d571f92784cd94a019292a1f45445077d0ef634
metrics:
- type: accuracy
value: 71.09616677874916
- type: f1
value: 69.74285784421075
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringP2P
type: mteb/medrxiv-clustering-p2p
config: default
split: test
revision: e7a26af6f3ae46b30dde8737f02c07b1505bcc73
metrics:
- type: v_measure
value: 31.474709231086184
- task:
type: Clustering
dataset:
name: MTEB MedrxivClusteringS2S
type: mteb/medrxiv-clustering-s2s
config: default
split: test
revision: 35191c8c0dca72d8ff3efcd72aa802307d469663
metrics:
- type: v_measure
value: 28.93630367824217
- task:
type: Reranking
dataset:
name: MTEB MindSmallReranking
type: mteb/mind_small
config: default
split: test
revision: 3bdac13927fdc888b903db93b2ffdbd90b295a69
metrics:
- type: map
value: 29.08234393834005
- type: mrr
value: 29.740466971605432
- task:
type: Retrieval
dataset:
name: MTEB NFCorpus
type: mteb/nfcorpus
config: default
split: test
revision: ec0fa4fe99da2ff19ca1214b7966684033a58814
metrics:
- type: map_at_1
value: 6.2059999999999995
- type: map_at_10
value: 14.442
- type: map_at_100
value: 18.005
- type: map_at_1000
value: 19.488
- type: map_at_3
value: 10.666
- type: map_at_5
value: 12.45
- type: mrr_at_1
value: 47.678
- type: mrr_at_10
value: 57.519
- type: mrr_at_100
value: 58.13700000000001
- type: mrr_at_1000
value: 58.167
- type: mrr_at_3
value: 55.779
- type: mrr_at_5
value: 56.940000000000005
- type: ndcg_at_1
value: 45.82
- type: ndcg_at_10
value: 37.651
- type: ndcg_at_100
value: 34.001999999999995
- type: ndcg_at_1000
value: 42.626
- type: ndcg_at_3
value: 43.961
- type: ndcg_at_5
value: 41.461
- type: precision_at_1
value: 47.678
- type: precision_at_10
value: 27.584999999999997
- type: precision_at_100
value: 8.455
- type: precision_at_1000
value: 2.118
- type: precision_at_3
value: 41.692
- type: precision_at_5
value: 36.161
- type: recall_at_1
value: 6.2059999999999995
- type: recall_at_10
value: 18.599
- type: recall_at_100
value: 33.608
- type: recall_at_1000
value: 65.429
- type: recall_at_3
value: 12.126000000000001
- type: recall_at_5
value: 14.902000000000001
- task:
type: Retrieval
dataset:
name: MTEB NQ
type: mteb/nq
config: default
split: test
revision: b774495ed302d8c44a3a7ea25c90dbce03968f31
metrics:
- type: map_at_1
value: 39.117000000000004
- type: map_at_10
value: 55.535000000000004
- type: map_at_100
value: 56.32899999999999
- type: map_at_1000
value: 56.34400000000001
- type: map_at_3
value: 51.439
- type: map_at_5
value: 53.89699999999999
- type: mrr_at_1
value: 43.714
- type: mrr_at_10
value: 58.05200000000001
- type: mrr_at_100
value: 58.582
- type: mrr_at_1000
value: 58.592
- type: mrr_at_3
value: 54.896
- type: mrr_at_5
value: 56.874
- type: ndcg_at_1
value: 43.685
- type: ndcg_at_10
value: 63.108
- type: ndcg_at_100
value: 66.231
- type: ndcg_at_1000
value: 66.583
- type: ndcg_at_3
value: 55.659000000000006
- type: ndcg_at_5
value: 59.681
- type: precision_at_1
value: 43.685
- type: precision_at_10
value: 9.962
- type: precision_at_100
value: 1.174
- type: precision_at_1000
value: 0.121
- type: precision_at_3
value: 24.961
- type: precision_at_5
value: 17.352
- type: recall_at_1
value: 39.117000000000004
- type: recall_at_10
value: 83.408
- type: recall_at_100
value: 96.553
- type: recall_at_1000
value: 99.136
- type: recall_at_3
value: 64.364
- type: recall_at_5
value: 73.573
- task:
type: Classification
dataset:
name: MTEB NewsClassification
type: ag_news
config: default
split: test
revision: eb185aade064a813bc0b7f42de02595523103ca4
metrics:
- type: accuracy
value: 78.87763157894737
- type: f1
value: 78.69611753876177
- task:
type: PairClassification
dataset:
name: MTEB OpusparcusPC (en)
type: GEM/opusparcus
config: en
split: test
revision: 9e9b1f8ef51616073f47f306f7f47dd91663f86a
metrics:
- type: cos_sim_accuracy
value: 99.89816700610999
- type: cos_sim_ap
value: 100
- type: cos_sim_f1
value: 99.9490575649516
- type: cos_sim_precision
value: 100
- type: cos_sim_recall
value: 99.89816700610999
- type: dot_accuracy
value: 99.89816700610999
- type: dot_ap
value: 100
- type: dot_f1
value: 99.9490575649516
- type: dot_precision
value: 100
- type: dot_recall
value: 99.89816700610999
- type: euclidean_accuracy
value: 99.89816700610999
- type: euclidean_ap
value: 100
- type: euclidean_f1
value: 99.9490575649516
- type: euclidean_precision
value: 100
- type: euclidean_recall
value: 99.89816700610999
- type: manhattan_accuracy
value: 99.89816700610999
- type: manhattan_ap
value: 100
- type: manhattan_f1
value: 99.9490575649516
- type: manhattan_precision
value: 100
- type: manhattan_recall
value: 99.89816700610999
- type: max_accuracy
value: 99.89816700610999
- type: max_ap
value: 100
- type: max_f1
value: 99.9490575649516
- task:
type: PairClassification
dataset:
name: MTEB PawsX (en)
type: paws-x
config: en
split: test
revision: 8a04d940a42cd40658986fdd8e3da561533a3646
metrics:
- type: cos_sim_accuracy
value: 62
- type: cos_sim_ap
value: 62.26837791655737
- type: cos_sim_f1
value: 62.607449856733524
- type: cos_sim_precision
value: 46.36604774535809
- type: cos_sim_recall
value: 96.36163175303197
- type: dot_accuracy
value: 62
- type: dot_ap
value: 62.26736459439965
- type: dot_f1
value: 62.607449856733524
- type: dot_precision
value: 46.36604774535809
- type: dot_recall
value: 96.36163175303197
- type: euclidean_accuracy
value: 62
- type: euclidean_ap
value: 62.26826112548132
- type: euclidean_f1
value: 62.607449856733524
- type: euclidean_precision
value: 46.36604774535809
- type: euclidean_recall
value: 96.36163175303197
- type: manhattan_accuracy
value: 62
- type: manhattan_ap
value: 62.26223761507973
- type: manhattan_f1
value: 62.585034013605444
- type: manhattan_precision
value: 46.34146341463415
- type: manhattan_recall
value: 96.36163175303197
- type: max_accuracy
value: 62
- type: max_ap
value: 62.26837791655737
- type: max_f1
value: 62.607449856733524
- task:
type: Retrieval
dataset:
name: MTEB QuoraRetrieval
type: mteb/quora
config: default
split: test
revision: e4e08e0b7dbe3c8700f0daef558ff32256715259
metrics:
- type: map_at_1
value: 69.90899999999999
- type: map_at_10
value: 83.56700000000001
- type: map_at_100
value: 84.19200000000001
- type: map_at_1000
value: 84.212
- type: map_at_3
value: 80.658
- type: map_at_5
value: 82.473
- type: mrr_at_1
value: 80.4
- type: mrr_at_10
value: 86.699
- type: mrr_at_100
value: 86.798
- type: mrr_at_1000
value: 86.80099999999999
- type: mrr_at_3
value: 85.677
- type: mrr_at_5
value: 86.354
- type: ndcg_at_1
value: 80.43
- type: ndcg_at_10
value: 87.41
- type: ndcg_at_100
value: 88.653
- type: ndcg_at_1000
value: 88.81599999999999
- type: ndcg_at_3
value: 84.516
- type: ndcg_at_5
value: 86.068
- type: precision_at_1
value: 80.43
- type: precision_at_10
value: 13.234000000000002
- type: precision_at_100
value: 1.513
- type: precision_at_1000
value: 0.156
- type: precision_at_3
value: 36.93
- type: precision_at_5
value: 24.26
- type: recall_at_1
value: 69.90899999999999
- type: recall_at_10
value: 94.687
- type: recall_at_100
value: 98.96000000000001
- type: recall_at_1000
value: 99.79599999999999
- type: recall_at_3
value: 86.25699999999999
- type: recall_at_5
value: 90.70700000000001
- task:
type: Clustering
dataset:
name: MTEB RedditClustering
type: mteb/reddit-clustering
config: default
split: test
revision: 24640382cdbf8abc73003fb0fa6d111a705499eb
metrics:
- type: v_measure
value: 46.02256865360266
- task:
type: Clustering
dataset:
name: MTEB RedditClusteringP2P
type: mteb/reddit-clustering-p2p
config: default
split: test
revision: 385e3cb46b4cfa89021f56c4380204149d0efe33
metrics:
- type: v_measure
value: 62.43157528757563
- task:
type: Retrieval
dataset:
name: MTEB SCIDOCS
type: mteb/scidocs
config: default
split: test
revision: f8c2fcf00f625baaa80f62ec5bd9e1fff3b8ae88
metrics:
- type: map_at_1
value: 5.093
- type: map_at_10
value: 12.982
- type: map_at_100
value: 15.031
- type: map_at_1000
value: 15.334
- type: map_at_3
value: 9.339
- type: map_at_5
value: 11.183
- type: mrr_at_1
value: 25.1
- type: mrr_at_10
value: 36.257
- type: mrr_at_100
value: 37.351
- type: mrr_at_1000
value: 37.409
- type: mrr_at_3
value: 33.050000000000004
- type: mrr_at_5
value: 35.205
- type: ndcg_at_1
value: 25.1
- type: ndcg_at_10
value: 21.361
- type: ndcg_at_100
value: 29.396
- type: ndcg_at_1000
value: 34.849999999999994
- type: ndcg_at_3
value: 20.704
- type: ndcg_at_5
value: 18.086
- type: precision_at_1
value: 25.1
- type: precision_at_10
value: 10.94
- type: precision_at_100
value: 2.257
- type: precision_at_1000
value: 0.358
- type: precision_at_3
value: 19.467000000000002
- type: precision_at_5
value: 15.98
- type: recall_at_1
value: 5.093
- type: recall_at_10
value: 22.177
- type: recall_at_100
value: 45.842
- type: recall_at_1000
value: 72.598
- type: recall_at_3
value: 11.833
- type: recall_at_5
value: 16.173000000000002
- task:
type: STS
dataset:
name: MTEB SICK-R
type: mteb/sickr-sts
config: default
split: test
revision: 20a6d6f312dd54037fe07a32d58e5e168867909d
metrics:
- type: cos_sim_pearson
value: 73.56535226754596
- type: cos_sim_spearman
value: 69.32425977603488
- type: euclidean_pearson
value: 71.32425703470898
- type: euclidean_spearman
value: 69.32425217267013
- type: manhattan_pearson
value: 71.25897281394246
- type: manhattan_spearman
value: 69.27132577049578
- task:
type: STS
dataset:
name: MTEB STS12
type: mteb/sts12-sts
config: default
split: test
revision: a0d554a64d88156834ff5ae9920b964011b16384
metrics:
- type: cos_sim_pearson
value: 69.66387868726018
- type: cos_sim_spearman
value: 67.85470749045027
- type: euclidean_pearson
value: 66.62075098063795
- type: euclidean_spearman
value: 67.85470749045027
- type: manhattan_pearson
value: 66.61455061901262
- type: manhattan_spearman
value: 67.87229618498695
- task:
type: STS
dataset:
name: MTEB STS13
type: mteb/sts13-sts
config: default
split: test
revision: 7e90230a92c190f1bf69ae9002b8cea547a64cca
metrics:
- type: cos_sim_pearson
value: 75.65731331392575
- type: cos_sim_spearman
value: 77.48991626780108
- type: euclidean_pearson
value: 77.19884738623692
- type: euclidean_spearman
value: 77.48985836619045
- type: manhattan_pearson
value: 77.0656684243772
- type: manhattan_spearman
value: 77.30289226582691
- task:
type: STS
dataset:
name: MTEB STS14
type: mteb/sts14-sts
config: default
split: test
revision: 6031580fec1f6af667f0bd2da0a551cf4f0b2375
metrics:
- type: cos_sim_pearson
value: 69.37003253666457
- type: cos_sim_spearman
value: 69.77157648098141
- type: euclidean_pearson
value: 69.39543876030432
- type: euclidean_spearman
value: 69.77157648098141
- type: manhattan_pearson
value: 69.29901600459745
- type: manhattan_spearman
value: 69.65074167527128
- task:
type: STS
dataset:
name: MTEB STS15
type: mteb/sts15-sts
config: default
split: test
revision: ae752c7c21bf194d8b67fd573edf7ae58183cbe3
metrics:
- type: cos_sim_pearson
value: 78.56777256540136
- type: cos_sim_spearman
value: 80.16458787843023
- type: euclidean_pearson
value: 80.16475730686916
- type: euclidean_spearman
value: 80.16458787843023
- type: manhattan_pearson
value: 80.12814463670401
- type: manhattan_spearman
value: 80.1357907984809
- task:
type: STS
dataset:
name: MTEB STS16
type: mteb/sts16-sts
config: default
split: test
revision: 4d8694f8f0e0100860b497b999b3dbed754a0513
metrics:
- type: cos_sim_pearson
value: 76.09572350919031
- type: cos_sim_spearman
value: 77.94490233429326
- type: euclidean_pearson
value: 78.36595251203524
- type: euclidean_spearman
value: 77.94490233429326
- type: manhattan_pearson
value: 78.41538768125166
- type: manhattan_spearman
value: 78.01244379569542
- task:
type: STS
dataset:
name: MTEB STS17 (en-en)
type: mteb/sts17-crosslingual-sts
config: en-en
split: test
revision: af5e6fb845001ecf41f4c1e033ce921939a2a68d
metrics:
- type: cos_sim_pearson
value: 80.7843552187951
- type: cos_sim_spearman
value: 82.28085055047386
- type: euclidean_pearson
value: 82.37373672515267
- type: euclidean_spearman
value: 82.28085055047386
- type: manhattan_pearson
value: 82.39387241346917
- type: manhattan_spearman
value: 82.36503339515906
- task:
type: STS
dataset:
name: MTEB STS22 (en)
type: mteb/sts22-crosslingual-sts
config: en
split: test
revision: eea2b4fe26a775864c896887d910b76a8098ad3f
metrics:
- type: cos_sim_pearson
value: 68.29963929962095
- type: cos_sim_spearman
value: 67.96868942546051
- type: euclidean_pearson
value: 68.93524903869285
- type: euclidean_spearman
value: 67.96868942546051
- type: manhattan_pearson
value: 68.79144468444811
- type: manhattan_spearman
value: 67.69311483884324
- task:
type: STS
dataset:
name: MTEB STSBenchmark
type: mteb/stsbenchmark-sts
config: default
split: test
revision: b0fddb56ed78048fa8b90373c8a3cfc37b684831
metrics:
- type: cos_sim_pearson
value: 72.84789696700685
- type: cos_sim_spearman
value: 75.67875747588545
- type: euclidean_pearson
value: 75.07752300463038
- type: euclidean_spearman
value: 75.67875747588545
- type: manhattan_pearson
value: 74.97934248140928
- type: manhattan_spearman
value: 75.62525644178724
- task:
type: STS
dataset:
name: MTEB STSBenchmarkMultilingualSTS (en)
type: PhilipMay/stsb_multi_mt
config: en
split: test
revision: 93d57ef91790589e3ce9c365164337a8a78b7632
metrics:
- type: cos_sim_pearson
value: 72.84789702519309
- type: cos_sim_spearman
value: 75.67875747588545
- type: euclidean_pearson
value: 75.07752310061133
- type: euclidean_spearman
value: 75.67875747588545
- type: manhattan_pearson
value: 74.97934257159595
- type: manhattan_spearman
value: 75.62525644178724
- task:
type: Reranking
dataset:
name: MTEB SciDocsRR
type: mteb/scidocs-reranking
config: default
split: test
revision: d3c5e1fc0b855ab6097bf1cda04dd73947d7caab
metrics:
- type: map
value: 81.55557720431086
- type: mrr
value: 94.91178665198272
- task:
type: Retrieval
dataset:
name: MTEB SciFact
type: mteb/scifact
config: default
split: test
revision: 0228b52cf27578f30900b9e5271d331663a030d7
metrics:
- type: map_at_1
value: 59.260999999999996
- type: map_at_10
value: 69.36099999999999
- type: map_at_100
value: 69.868
- type: map_at_1000
value: 69.877
- type: map_at_3
value: 66.617
- type: map_at_5
value: 68.061
- type: mrr_at_1
value: 62.333000000000006
- type: mrr_at_10
value: 70.533
- type: mrr_at_100
value: 70.966
- type: mrr_at_1000
value: 70.975
- type: mrr_at_3
value: 68.667
- type: mrr_at_5
value: 69.717
- type: ndcg_at_1
value: 62.333000000000006
- type: ndcg_at_10
value: 73.82300000000001
- type: ndcg_at_100
value: 76.122
- type: ndcg_at_1000
value: 76.374
- type: ndcg_at_3
value: 69.27499999999999
- type: ndcg_at_5
value: 71.33
- type: precision_at_1
value: 62.333000000000006
- type: precision_at_10
value: 9.8
- type: precision_at_100
value: 1.097
- type: precision_at_1000
value: 0.11199999999999999
- type: precision_at_3
value: 26.889000000000003
- type: precision_at_5
value: 17.599999999999998
- type: recall_at_1
value: 59.260999999999996
- type: recall_at_10
value: 86.2
- type: recall_at_100
value: 96.667
- type: recall_at_1000
value: 98.667
- type: recall_at_3
value: 74.006
- type: recall_at_5
value: 79.167
- task:
type: PairClassification
dataset:
name: MTEB SprintDuplicateQuestions
type: mteb/sprintduplicatequestions-pairclassification
config: default
split: test
revision: d66bd1f72af766a5cc4b0ca5e00c162f89e8cc46
metrics:
- type: cos_sim_accuracy
value: 99.81881188118813
- type: cos_sim_ap
value: 95.20169041096409
- type: cos_sim_f1
value: 90.76224129227664
- type: cos_sim_precision
value: 91.64118246687055
- type: cos_sim_recall
value: 89.9
- type: dot_accuracy
value: 99.81881188118813
- type: dot_ap
value: 95.20169041096409
- type: dot_f1
value: 90.76224129227664
- type: dot_precision
value: 91.64118246687055
- type: dot_recall
value: 89.9
- type: euclidean_accuracy
value: 99.81881188118813
- type: euclidean_ap
value: 95.2016904109641
- type: euclidean_f1
value: 90.76224129227664
- type: euclidean_precision
value: 91.64118246687055
- type: euclidean_recall
value: 89.9
- type: manhattan_accuracy
value: 99.81881188118813
- type: manhattan_ap
value: 95.22680188132777
- type: manhattan_f1
value: 90.79013588324108
- type: manhattan_precision
value: 91.38804457953394
- type: manhattan_recall
value: 90.2
- type: max_accuracy
value: 99.81881188118813
- type: max_ap
value: 95.22680188132777
- type: max_f1
value: 90.79013588324108
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClustering
type: mteb/stackexchange-clustering
config: default
split: test
revision: 6cbc1f7b2bc0622f2e39d2c77fa502909748c259
metrics:
- type: v_measure
value: 57.8638628701308
- task:
type: Clustering
dataset:
name: MTEB StackExchangeClusteringP2P
type: mteb/stackexchange-clustering-p2p
config: default
split: test
revision: 815ca46b2622cec33ccafc3735d572c266efdb44
metrics:
- type: v_measure
value: 37.82028248106046
- task:
type: Reranking
dataset:
name: MTEB StackOverflowDupQuestions
type: mteb/stackoverflowdupquestions-reranking
config: default
split: test
revision: e185fbe320c72810689fc5848eb6114e1ef5ec69
metrics:
- type: map
value: 50.870860210170946
- type: mrr
value: 51.608084521687466
- task:
type: Summarization
dataset:
name: MTEB SummEval
type: mteb/summeval
config: default
split: test
revision: cda12ad7615edc362dbf25a00fdd61d3b1eaf93c
metrics:
- type: cos_sim_pearson
value: 31.60384207444685
- type: cos_sim_spearman
value: 30.84047452209471
- type: dot_pearson
value: 31.60384104417333
- type: dot_spearman
value: 30.84047452209471
- task:
type: Retrieval
dataset:
name: MTEB TRECCOVID
type: mteb/trec-covid
config: default
split: test
revision: bb9466bac8153a0349341eb1b22e06409e78ef4e
metrics:
- type: map_at_1
value: 0.246
- type: map_at_10
value: 2.051
- type: map_at_100
value: 13.129
- type: map_at_1000
value: 31.56
- type: map_at_3
value: 0.681
- type: map_at_5
value: 1.105
- type: mrr_at_1
value: 94
- type: mrr_at_10
value: 97
- type: mrr_at_100
value: 97
- type: mrr_at_1000
value: 97
- type: mrr_at_3
value: 97
- type: mrr_at_5
value: 97
- type: ndcg_at_1
value: 87
- type: ndcg_at_10
value: 80.716
- type: ndcg_at_100
value: 63.83
- type: ndcg_at_1000
value: 56.215
- type: ndcg_at_3
value: 84.531
- type: ndcg_at_5
value: 84.777
- type: precision_at_1
value: 94
- type: precision_at_10
value: 84.6
- type: precision_at_100
value: 66.03999999999999
- type: precision_at_1000
value: 24.878
- type: precision_at_3
value: 88.667
- type: precision_at_5
value: 89.60000000000001
- type: recall_at_1
value: 0.246
- type: recall_at_10
value: 2.2079999999999997
- type: recall_at_100
value: 15.895999999999999
- type: recall_at_1000
value: 52.683
- type: recall_at_3
value: 0.7040000000000001
- type: recall_at_5
value: 1.163
- task:
type: Retrieval
dataset:
name: MTEB Touche2020
type: mteb/touche2020
config: default
split: test
revision: a34f9a33db75fa0cbb21bb5cfc3dae8dc8bec93f
metrics:
- type: map_at_1
value: 3.852
- type: map_at_10
value: 14.316
- type: map_at_100
value: 20.982
- type: map_at_1000
value: 22.58
- type: map_at_3
value: 7.767
- type: map_at_5
value: 10.321
- type: mrr_at_1
value: 51.019999999999996
- type: mrr_at_10
value: 66.365
- type: mrr_at_100
value: 66.522
- type: mrr_at_1000
value: 66.522
- type: mrr_at_3
value: 62.925
- type: mrr_at_5
value: 64.762
- type: ndcg_at_1
value: 46.939
- type: ndcg_at_10
value: 34.516999999999996
- type: ndcg_at_100
value: 44.25
- type: ndcg_at_1000
value: 54.899
- type: ndcg_at_3
value: 40.203
- type: ndcg_at_5
value: 37.004
- type: precision_at_1
value: 51.019999999999996
- type: precision_at_10
value: 29.796
- type: precision_at_100
value: 8.633000000000001
- type: precision_at_1000
value: 1.584
- type: precision_at_3
value: 40.816
- type: precision_at_5
value: 35.918
- type: recall_at_1
value: 3.852
- type: recall_at_10
value: 20.891000000000002
- type: recall_at_100
value: 52.428
- type: recall_at_1000
value: 84.34899999999999
- type: recall_at_3
value: 8.834
- type: recall_at_5
value: 12.909
- task:
type: Classification
dataset:
name: MTEB ToxicConversationsClassification
type: mteb/toxic_conversations_50k
config: default
split: test
revision: edfaf9da55d3dd50d43143d90c1ac476895ae6de
metrics:
- type: accuracy
value: 64.7092
- type: ap
value: 11.972915012305819
- type: f1
value: 49.91050149892115
- task:
type: Classification
dataset:
name: MTEB TweetSentimentExtractionClassification
type: mteb/tweet_sentiment_extraction
config: default
split: test
revision: d604517c81ca91fe16a244d1248fc021f9ecee7a
metrics:
- type: accuracy
value: 56.737408036219584
- type: f1
value: 57.07235266246011
- task:
type: Clustering
dataset:
name: MTEB TwentyNewsgroupsClustering
type: mteb/twentynewsgroups-clustering
config: default
split: test
revision: 6125ec4e24fa026cec8a478383ee943acfbd5449
metrics:
- type: v_measure
value: 35.9147539025798
- task:
type: PairClassification
dataset:
name: MTEB TwitterSemEval2015
type: mteb/twittersemeval2015-pairclassification
config: default
split: test
revision: 70970daeab8776df92f5ea462b6173c0b46fd2d1
metrics:
- type: cos_sim_accuracy
value: 82.52369315133814
- type: cos_sim_ap
value: 62.34858091376534
- type: cos_sim_f1
value: 58.18225190839694
- type: cos_sim_precision
value: 53.09098824553766
- type: cos_sim_recall
value: 64.35356200527704
- type: dot_accuracy
value: 82.52369315133814
- type: dot_ap
value: 62.34857753814992
- type: dot_f1
value: 58.18225190839694
- type: dot_precision
value: 53.09098824553766
- type: dot_recall
value: 64.35356200527704
- type: euclidean_accuracy
value: 82.52369315133814
- type: euclidean_ap
value: 62.34857756663386
- type: euclidean_f1
value: 58.18225190839694
- type: euclidean_precision
value: 53.09098824553766
- type: euclidean_recall
value: 64.35356200527704
- type: manhattan_accuracy
value: 82.49389044525243
- type: manhattan_ap
value: 62.32245347238179
- type: manhattan_f1
value: 58.206309819213054
- type: manhattan_precision
value: 52.70704044511021
- type: manhattan_recall
value: 64.9868073878628
- type: max_accuracy
value: 82.52369315133814
- type: max_ap
value: 62.34858091376534
- type: max_f1
value: 58.206309819213054
- task:
type: PairClassification
dataset:
name: MTEB TwitterURLCorpus
type: mteb/twitterurlcorpus-pairclassification
config: default
split: test
revision: 8b6510b0b1fa4e4c4f879467980e9be563ec1cdf
metrics:
- type: cos_sim_accuracy
value: 88.34555827220863
- type: cos_sim_ap
value: 84.84152481680071
- type: cos_sim_f1
value: 76.860456739428
- type: cos_sim_precision
value: 72.21470150263978
- type: cos_sim_recall
value: 82.14505697566985
- type: dot_accuracy
value: 88.34555827220863
- type: dot_ap
value: 84.84152743322608
- type: dot_f1
value: 76.860456739428
- type: dot_precision
value: 72.21470150263978
- type: dot_recall
value: 82.14505697566985
- type: euclidean_accuracy
value: 88.34555827220863
- type: euclidean_ap
value: 84.84152589453169
- type: euclidean_f1
value: 76.860456739428
- type: euclidean_precision
value: 72.21470150263978
- type: euclidean_recall
value: 82.14505697566985
- type: manhattan_accuracy
value: 88.38242713548337
- type: manhattan_ap
value: 84.8112124970968
- type: manhattan_f1
value: 76.83599206057487
- type: manhattan_precision
value: 73.51244900829934
- type: manhattan_recall
value: 80.47428395441946
- type: max_accuracy
value: 88.38242713548337
- type: max_ap
value: 84.84152743322608
- type: max_f1
value: 76.860456739428
- task:
type: Clustering
dataset:
name: MTEB WikiCitiesClustering
type: jinaai/cities_wiki_clustering
config: default
split: test
revision: ddc9ee9242fa65332597f70e967ecc38b9d734fa
metrics:
- type: v_measure
value: 85.5314389263015
---
# lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF
This model was converted to GGUF format from [`Snowflake/snowflake-arctic-embed-m-v1.5`](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048
```
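Because this is an embedding model rather than a chat model, the `llama-embedding` example binary may be the more natural way to exercise it. The sketch below assumes the same `--hf-repo`/`--hf-file` pair as above and simply prints the embedding vector for one input.
```bash
# Hedged sketch: compute a single sentence embedding with llama.cpp's
# llama-embedding example; the vector is written to stdout.
llama-embedding --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF \
  --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf \
  -p "what is snowflake?"
```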
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./main --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```bash
./server --hf-repo lynxeco/snowflake-arctic-embed-m-v1.5-Q8_0-GGUF --hf-file snowflake-arctic-embed-m-v1.5-q8_0.gguf -c 2048
```
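When serving embeddings, recent llama.cpp builds expose an OpenAI-compatible `/v1/embeddings` endpoint once the server is started with the `--embedding` flag. The following is a usage sketch with a placeholder input, assuming the default port 8080.
```bash
# Hedged sketch: query the embeddings endpoint of a llama-server instance
# launched with --embedding on the default port.
curl http://localhost:8080/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"input": "what is snowflake?"}'
```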
**Exciting Update!**: `nomic-embed-text-v1.5` is now multimodal! [nomic-embed-vision-v1](https://huggingface.co/nomic-ai/nomic-embed-vision-v1.5) is aligned to the embedding space of `nomic-embed-text-v1.5`, meaning any text embedding is multimodal!
## Usage
**Important**: the text prompt *must* include a *task instruction prefix*, instructing the model which task is being performed.
For example, if you are implementing a RAG application, you embed your documents as `search_document: <text here>` and embed your user queries as `search_query: <text here>`.
## Task instruction prefixes
### `search_document`
#### Purpose: embed texts as documents from a dataset
This prefix is used for embedding texts as documents, for example as documents for a RAG index.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
sentences = ['search_document: TSNE is a dimensionality reduction algorithm created by Laurens van Der Maaten']
embeddings = model.encode(sentences)
print(embeddings)
```
### `search_query`
#### Purpose: embed texts as questions to answer
This prefix is used for embedding texts as questions that documents from a dataset could answer, for example as user queries to be answered by a RAG application.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
sentences = ['search_query: Who is Laurens van Der Maaten?']
embeddings = model.encode(sentences)
print(embeddings)
```
### `clustering`
#### Purpose: embed texts to group them into clusters
This prefix is used for embedding texts in order to group them into clusters, discover common topics, or remove semantic duplicates.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
sentences = ['clustering: the quick brown fox']
embeddings = model.encode(sentences)
print(embeddings)
```
### `classification`
#### Purpose: embed texts to classify them
This prefix is used for embedding texts into vectors that will be used as features for a classification model.
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
sentences = ['classification: the quick brown fox']
embeddings = model.encode(sentences)
print(embeddings)
```
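Putting the prefixes together, here is a minimal retrieval sketch (the query and documents are purely illustrative): documents are embedded with `search_document`, the query with `search_query`, and matches are ranked by cosine similarity.
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

# Documents are embedded with the search_document prefix...
docs = [
    "search_document: TSNE is a dimensionality reduction algorithm created by Laurens van Der Maaten",
    "search_document: The quick brown fox jumps over the lazy dog",
]
# ...and queries with the search_query prefix.
query = "search_query: Who is Laurens van Der Maaten?"

doc_embeddings = model.encode(docs, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank documents by cosine similarity to the query.
scores = util.cos_sim(query_embedding, doc_embeddings)
print(scores)  # the TSNE document should score highest
```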
### Sentence Transformers
```python
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
matryoshka_dim = 512
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']
embeddings = model.encode(sentences, convert_to_tensor=True)
embeddings = F.layer_norm(embeddings, normalized_shape=(embeddings.shape[1],))
embeddings = embeddings[:, :matryoshka_dim]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings)
```
### Transformers
```diff
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

sentences = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?']

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1.5', trust_remote_code=True, safe_serialization=True)
model.eval()

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

+ matryoshka_dim = 512

with torch.no_grad():
    model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+ embeddings = F.layer_norm(embeddings, normalized_shape=(embeddings.shape[1],))
+ embeddings = embeddings[:, :matryoshka_dim]
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings)
```
The model natively supports scaling the sequence length past 2048 tokens (up to 8192). To do so, update the tokenizer and model initialization:
```diff
- tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
+ tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192)
- model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1.5', trust_remote_code=True)
+ model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1.5', trust_remote_code=True, rotary_scaling_factor=2)
```
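For reference, a minimal assembled sketch with the long-context settings from the diff above applied; the repeated filler text is only illustrative.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased', model_max_length=8192)
model = AutoModel.from_pretrained('nomic-ai/nomic-embed-text-v1.5', trust_remote_code=True, rotary_scaling_factor=2)
model.eval()

# A long document, embedded with the search_document prefix (illustrative text).
long_text = 'search_document: ' + 'some very long text ' * 1000
encoded_input = tokenizer([long_text], padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)
print(model_output[0].shape)  # (1, sequence_length, 768)
```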
### Transformers.js
```js
import { pipeline, layer_norm } from '@xenova/transformers';
// Create a feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'nomic-ai/nomic-embed-text-v1.5', {
quantized: false, // Comment out this line to use the quantized version
});
// Define sentences
const texts = ['search_query: What is TSNE?', 'search_query: Who is Laurens van der Maaten?'];
// Compute sentence embeddings
let embeddings = await extractor(texts, { pooling: 'mean' });
console.log(embeddings); // Tensor of shape [2, 768]
const matryoshka_dim = 512;
embeddings = layer_norm(embeddings, [embeddings.dims[1]])
.slice(null, [0, matryoshka_dim])
.normalize(2, -1);
console.log(embeddings.tolist());
```
## Nomic API
The easiest way to use Nomic Embed is through the Nomic Embedding API.
Generating embeddings with the `nomic` Python client is as easy as
```python
from nomic import embed
output = embed.text(
texts=['Nomic Embedding API', '#keepAIOpen'],
model='nomic-embed-text-v1.5',
task_type='search_document',
dimensionality=256,
)
print(output)
```
For more information, see the [API reference](https://docs.nomic.ai/reference/endpoints/nomic-embed-text).
## Infinity
Usage with [Infinity](https://github.com/michaelfeil/infinity).
```bash
docker run --gpus all -v $PWD/data:/app/.cache -e HF_TOKEN=$HF_TOKEN -p "7997":"7997" \
michaelf34/infinity:0.0.70 \
v2 --model-id nomic-ai/nomic-embed-text-v1.5 --revision "main" --dtype float16 --batch-size 8 --engine torch --port 7997 --no-bettertransformer
```
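Once the container is running, Infinity serves an OpenAI-compatible API; a minimal sketch using `requests` follows (the port matches the command above, and the `/embeddings` path and response shape assume the standard OpenAI embeddings schema). Remember to include the task prefix in the input text.
```python
import requests

# Assumes the Infinity container from the command above is listening on port 7997.
response = requests.post(
    "http://localhost:7997/embeddings",
    json={
        "model": "nomic-ai/nomic-embed-text-v1.5",
        "input": ["search_query: What is TSNE?"],
    },
)
print(response.json()["data"][0]["embedding"][:8])  # first few dimensions
```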
## Adjusting Dimensionality
`nomic-embed-text-v1.5` is an improvement upon [Nomic Embed](https://huggingface.co/nomic-ai/nomic-embed-text-v1) that utilizes [Matryoshka Representation Learning](https://arxiv.org/abs/2205.13147), which gives developers the flexibility to trade off embedding size for a negligible reduction in performance.
| Name | SeqLen | Dimension | MTEB |
| :-------------------------------:| :----- | :-------- | :------: |
| nomic-embed-text-v1 | 8192 | 768 | **62.39** |
| nomic-embed-text-v1.5 | 8192 | 768 | 62.28 |
| nomic-embed-text-v1.5 | 8192 | 512 | 61.96 |
| nomic-embed-text-v1.5 | 8192 | 256 | 61.04 |
| nomic-embed-text-v1.5 | 8192 | 128 | 59.34 |
| nomic-embed-text-v1.5 | 8192 | 64 | 56.10 |

## Training
Click the Nomic Atlas map below to visualize a 5M sample of our contrastive pretraining data!
[](https://atlas.nomic.ai/map/nomic-text-embed-v1-5m-sample)
We train our embedder using a multi-stage training pipeline. Starting from a long-context [BERT model](https://huggingface.co/nomic-ai/nomic-bert-2048),
the first unsupervised contrastive stage trains on a dataset generated from weakly related text pairs, such as question-answer pairs from forums like StackExchange and Quora, title-body pairs from Amazon reviews, and summarizations from news articles.
In the second finetuning stage, higher quality labeled datasets such as search queries and answers from web searches are leveraged. Data curation and hard-example mining are crucial in this stage.
For more details, see the Nomic Embed [Technical Report](https://static.nomic.ai/reports/2024_Nomic_Embed_Text_Technical_Report.pdf) and corresponding [blog post](https://blog.nomic.ai/posts/nomic-embed-matryoshka).
The training data is released in its entirety. For more details, see the `contrastors` [repository](https://github.com/nomic-ai/contrastors).
# Join the Nomic Community
- Nomic: [https://nomic.ai](https://nomic.ai)
- Discord: [https://discord.gg/myY5YDR8z8](https://discord.gg/myY5YDR8z8)
- Twitter: [https://twitter.com/nomic_ai](https://twitter.com/nomic_ai)
# Citation
If you find the model, dataset, or training code useful, please cite our work:
```bibtex
@misc{nussbaum2024nomic,
title={Nomic Embed: Training a Reproducible Long Context Text Embedder},
author={Zach Nussbaum and John X. Morris and Brandon Duderstadt and Andriy Mulyar},
year={2024},
eprint={2402.01613},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```